Game Anim: Video Game Animation Explained [2 ed.] 9781000357806, 1000357805

The second edition of Game Anim expands upon the first edition with an all-new chapter on 2D and Pixel Art Animation and a new section covering motion matching in the motion capture chapter.


English · Pages: 328 [351] · Year: 2021


Table of contents:
Cover
Half Title
Title Page
Copyright Page
Dedication
Table of Contents
Preface
Acknowledgments
Author
Chapter 1 The Video Game Animator
What It Means to Be a Video Game Animator
Artistry and Creativity
Technical Ability
Teamwork
Design Sense
Accepting the Nature of the Medium
Life Experience
Different Areas of Game Animation
Player Character Animation
Facial Animation
Cinematics and Cutscenes
Technical Animation
Nonplayer Characters
Cameras
Environmental and Prop Animation
Required Software and Equipment
Digital Content Creation (DCC) Software
Game Engines
Reference Camera
Video Playback Software
Notepad
Chapter 2 The Game Development Environment
Finding the Right Fit
Studio Culture
Team Strengths
Game Pillars
Team Size
Team Dynamics
Game Animator Roles
Gameplay Animator
Cinematic Animator
Lead Animator
Animation Director
Principal Animator
Technical Animator
Animation Technical Director
Other Game Development Disciplines
Programmers
Artists
Design
Audio and Effects
Quality Assurance
Management
Public Relations and Marketing
A Video Game Project Overview
Phase 1: Conception
Phase 2: Pre-production
Phase 3: Production
Phase 4: Shipping
Phase 5: Post-Release
Chapter 3 The 12 Animation Principles
Principle 1: Squash and Stretch
Principle 2: Staging
Principle 3: Anticipation
Principle 4: Straight Ahead and Pose to Pose
Principle 5: Follow-Through and Overlapping Action
Principle 6: Slow-In and Slow-Out
Principle 7: Arcs
Principle 8: Secondary Action
Principle 9: Appeal
Principle 10: Timing
Principle 11: Exaggeration
Principle 12: Solid Drawings
Chapter 4 The Five Fundamentals of Game Animation
Feel
Response
Inertia and Momentum
Visual Feedback
Fluidity
Blending and Transitions
Seamless Cycles
Settling
Readability
Posing for Game Cameras
Silhouettes
Collision and Center of Mass/Balance
Context
Distinction vs Homogeneity
Repetition
Onscreen Placement
Elegance
Simplicity of Design
Bang for the Buck
Sharing and Standardization
Chapter 5 What You Need to Know
Basic Game Animation Concepts
Common Types of Game Animation
Cycles
Linear Actions
Transitions
Skeletons, Rigs, and Exporting to Game
How Spline Curves Work
Collision Movement
Forward vs Inverse Kinematics
Intermediate Game Animation Concepts
State Machines
Parametric Blending
Partial Animations
Additive Layers
Physics, Dynamics, and Ragdoll
Advanced Game Animation Concepts
Procedural Motion and Systems
Full-Body IK
Look-Ats
Blend Shapes
Muscle Simulation
Animated Textures/Shaders
Artificial Intelligence
Decision-Making
Pathfinding
Interview: Mark Grigsby
Animation Director—Call of Duty: Modern Warfare
Chapter 6 The Game Animation Workflow
Reference Gathering
Don’t Be Precious
Animate Pose to Pose Over Straight Ahead
Rough It In
Get It In-Game!
Iteration Is the Key to Quality
Blocking From Inside to Out
Pose-Sharing Libraries
Keep Your Options Open
Use Prefab Scenes
Avoiding Data Loss
Set Undo Queue to Max
Configure Auto-Save
Save Often
Version Control
Interview: Adrián Miguel
Animation Lead—GRIS
Chapter 7 Our Project: Pre-Production
Style References
Defining a Style
Comparisons
Realism vs Stylized
Who Is the Character?
Previz
Gameplay Mock-Ups
Target Footage
Prototyping
Pitching the Game
Interview: Eric Chahi
Creator—Another World
Chapter 8 Our Project: Technical Animation
Character Setup
Modeling Considerations
Skinning
Rigging
Animation Sharing
File Management
File-Naming Conventions
Folder Organization
Referencing
Exporting
Export Data Format
Engine Export Rules
Animation Memory and Compression
Animation Slicing
In-Engine Work
Event Tags
Blend Timing
Scripting
Test Levels
Asset Housekeeping
Digital Content Creation Animation Tools
Interview: Masanobu Tanaka
Animation Director—The Last Guardian
Chapter 9 Our Project: Gameplay Animation
The Three Cs
Gameplay Cameras
Settings and Variables
Camera-Shake
Ground Movement
The All-Important Idle Animation
Seamlessly Looping Walk/Run Cycles
Animating Forward vs In Place
Inclines, Turning, and Exponential Growth
Strafing
Starts, Stops, and Other Transitions
Ten Common Walk/Run Cycle Mistakes
Jumping
Arcs
Take-Off
Landing
Climbing and Mantling
Height Variations and Metrics
Collision Considerations
Cut Points and Key Poses
Alignment
Attack Animations
Anticipation vs Response
Visual Feedback
Telegraphing
Follow-Through and Overlapping Limbs
Cutting Up Combos
Readability of Swipes Over Stabs
Damage Animations
Directional and Body-Part Damage
Contact Standardization
Synced Damages
Recovery Timing and Distances
Impact Beyond Animation
Interview: Mariel Cartwright
Animation Lead—Skullgirls
Chapter 10 Our Project: Cinematics and Facial
Cinematic Cameras
Field-of-View
Depth-of-Field
The Virtual Cameraman
The Five Cs of Cinematography
Cutscene Dos and Don’ts
The 180 Rule
Cut on an Action
Straddle Cuts with Camera Motion
Trigger Cutscenes on a Player Action
Avoid Player in Opening Shot
Use Cuts to Teleport
End Cutscenes Facing the Next Goal
Avoid Overlapping Game-Critical Information
Acting vs Exposition
Allow Interaction Whenever Possible
Avoid Full-Shot Ease-Ins/Outs
Track Subjects Naturally
Consider Action Pacing
Place Save Points After Cutscenes
Planning Cutscenes
Cutscene Storyboarding
Cutscene Previsualization
Cutscene Workload
Scene Prioritization
Cutscene Creation Stages
The Eyes Have It
Eyelines
IK vs FK Eyes
Saccades
Eye Vergence
Thought Directions
Lip-Sync
Phonemes
Shape Transitions
Facial Action Coding System
Sharing Facial Animation
Creating Quantities of Facial Animation
Troubleshooting Lip-Sync
Interview: Marie Celaya
Facial Animation Supervisor—Detroit: Become Human
Chapter 11 Our Project: Motion Capture
Do You Even Need Mocap?
How Mocap Works
Different Mocap Methods
Optical Marker-Based
Accelerometer Suits
Depth Cameras
Performance Capture
The Typical Mocap Pipeline
Mocap Retargeting
Mocap Shoot Planning
Shot List
Ordering/Grouping Your Shots
Rehearsals
Mocap Previz
Working with Actors
Casting
Directing Actors
Props and Sets
Prop Recording
Set Building
Virtual Cameras
Getting the Best Take
Working With Mocap
Retiming
Pose Exaggeration
Offset Poses
Hiding Offset Pose Deltas
Blending and Cycling
Motion Matching
Planning A Motion-Matching Mocap Shoot
The Motion-Matching Shot List
Naming Convention
Core Idles and Movement
Directional Starts and Stops
Pivot Turns
Strafe Direction Changes
Strafe Diamonds and Squares
Strafe Starts and Stops
Turn on the Spot
Repositions
Turning Circles
Snaking
Wild Takes
Interview: Bruno Velazquez
Animation Director—God of War
Chapter 12 Our Project: Animation Team Management
Scheduling
Front-Loading
Prioritizing Quality
De-Risking
Predicting Cuts and Changes
Adaptive Schedules
Conflicts and Dependencies
Milestones
Teamwork
Collaboration
Leadership
Mentorship
Hiring
The Animation Critique
Outsourcing
Interview: Yoshitaka Shimizu
NPC Animation Lead—Metal Gear Solid Series
Chapter 13 Our Project: Polish and Debug
Closing Stages of a Project
Alpha
Beta
Release Candidates and Gold Master
Animation Polish Hints and Tips
Foot-sliding
Popping
Contact Points
Momentum Inconsistency
Interpenetration
Targeted Polishing
Memory Management and Compression
Debugging Best Practices
Test/Modify Elements One by One
Version Control Comments
Avoid Incrementally Fine-Tuning
Troubleshooting
Interview: Alex Drouin
Animation Director—Assassin’s Creed, Prince of Persia: The Sands of Time
Chapter 14 2D and Pixel Art Animation
A Brief History of 2D Game Animation
Why Choose 2D Over 3D?
Pros
Cons
Different 2D Production Approaches
Pixel Art Animation
Traditional Drawings
Rotoscoping
Modular Animation/Motion Tweening
Understanding Historical Limitations
Screen Resolution
Character/Tile Size
Palettes
Sprite Sheets
Retro Case Study: Shovel Knight
2D Game Animation DCCs and Engines
Editor Screen Layout
Required 2D Software
Pixel Art Animation: Aseprite
2D Art All-Rounder: Photoshop
Modular Animation: Spine
Sprite Sheet Editor: Texture Packer
Game Engine: Game Maker Studio
General 2D Workflow Tips
2D and Pixel Art Game Animation Concepts
Outline Clean-up
Coloring
Sub-Pixel Animation
Character Design Considerations
Framerate
Frame Count
Modular Animation Hybrid Workflow
Onion-Skinning
Isometric Sprites
Hitbox vs Hurtbox
Background Animation
Parallax Scrolling
2D Visual Effects Animation
Modern Case Study: Streets of Rage 4
Interview: Ben Fiquet
Art Director & Animation: Streets of Rage 4
Chapter 15 The Future
Getting a Job
The Game Animation Demo Reel
What to Include
Editing Your Reel
The Reel Breakdown
Your Résumé
Your Web Presence
The Animation Test
Incoming Technologies
Virtual and Augmented Reality
Affordable Motion Capture
Runtime Rigs
In-Game Workflow
Procedural Movement
Machine Learned Motion
Remote Working
Index

Game Anim

Video Game Animation Explained 2nd edition

Jonathan Cooper

Second edition published 2021
by CRC Press, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
and by CRC Press, 2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

© 2021 Taylor & Francis Group, LLC

First edition published by CRC Press 2019

CRC Press is an imprint of Taylor & Francis Group, LLC

Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact mpkbookspermissions@tandf.co.uk

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification and explanation without intent to infringe.

ISBN: 978-0-367-70770-5 (hbk)
ISBN: 978-0-367-70765-1 (pbk)
ISBN: 978-1-003-14789-3 (ebk)

Typeset in Myriad Pro by Deanta Global Publishing Services, Chennai, India

For Clara and Jade, whose adventures in real life inspire me in the virtual.


Preface

This second edition of Game Anim: Video Game Animation Explained has been something of a labor of love for me as I researched the medium's origins for an all-new chapter on "2D and Pixel Art Animation." After all, I myself began animating with what's now known as "pixel art" as a kid in the early 1990s, painstakingly recreating Street Fighter II characters' poses from screenshots in fighting-game tips books, and mocking up level art tile-sets in Deluxe Paint III and IV on my Commodore Amiga.

Toward the end of high school I was incredibly fortunate to be invited to my local game studio back in Scotland (DMA Design, the original creators of Grand Theft Auto) after sending in a portfolio on floppy disk, but upon visiting I was utterly devastated to be informed that games were moving away from pixels, instead adopting the blocky characters and stretched textures of early video game 3D. Every trick and technique I'd spent years learning was all for naught. Soon after, I sold my computer and gave up on games.

Instead, I went to art college and discovered traditional animation, making crude animations on everything from zoetropes and phenakistiscopes to cels and stop-motion. I never even considered making video games for several years, until the day I discovered the fluid motion capture of the Tekken series and fell in love all over again. Characters were finally moving as fluidly as they did at the pinnacle of 2D arcade games. Luckily for me, my school had just opened a lab of cutting-edge (for the time) Silicon Graphics machines which no one else was using, and so I came to love animating 3D video games.

All this to say: I just hit my 20th year animating games professionally, and I'm more excited now about the current state and future of video game animation than ever. This is because of the breadth of experiences available to players now, enabling a wide variety of creative opportunities for game animators. Beautiful 2D and pixel art animation has seen a resurgence in smaller indie titles, and has become something of a touchstone for video games across the greater cultural landscape due to its association with what are now classic games, and the ease with which aspiring animators can create and share their work.

This new Chapter 14, "2D and Pixel Art Animation," reflects my strong belief that now is the time to capture and codify the work practices of decades of 2D game artists and animators, to which end I've enlisted the knowledge and expertise of some of the best 2D and pixel animators working in games today. There's just something so satisfying about the meticulous placement of pixels on the screen and the liberation that comes with being able to bring to life anything you can imagine without the need for modeling or rigging. This freedom is further enhanced by the simplicity and ease of use of 2D animation software (not to mention the affordable price compared to 3D), all lending itself to the minimalist, almost impressionistic aesthetic of the 2D games of today, especially when juxtaposed with the biggest AAA blockbusters on which I've spent most of my career.

But video game animation has never been about looking backward, so there's also a whole new section in Chapter 11, "Our Project: Motion Capture," covering the developing technique of motion matching. Now that motion matching has shipped in several games, this is practical experience laid out in an example shot list, as well as an attempt to dispel some misunderstandings about the technology formed back when it was purely theoretical. This is, of course, in addition to everything present in the first edition—essentially rounding out this book's contents into an even more complete explanation of all areas of video game animation.

As ever, I really do hope you find this book useful and can't wait to see what the aspiring animators of tomorrow will create. It's never been easier to begin making video game animation—an exciting medium entertaining more people around the world now than ever.

Jonathan Cooper
August 2020

Acknowledgments

Thanks once more to everyone who contributed toward image licensing for the first and second editions, especially to Angella Austin for tolerating my months of email hustling. Importantly, the new content would have been impossible without extensive conversations with the following experts in the field: Nick Wozniak, Kay Yu, Ben Fiquet, Cyrille Lagarigue, Mariel Cartwright, Adrián Miguel, Roger Mendoza, Nicolas Leger, Maksym Zhuravlov, and Kristjan Zadziuk. Special mention also to Brian Gatewood for the fantastic AZRI pixel animation, Matthew Bachnick for her new weapons, and of course Sol Brennan for their continued rework of the free AZRI Rig.

Additional Legal Notices

The cover "AZRI" character is property of Matthew Bachnick and is used under license; it cannot be used for commercial purposes.

The Last Guardian: © 2016 Sony Interactive Entertainment Inc.

For Assassin's Creed pictures: "© 2007–2017 Ubisoft Entertainment. All Rights Reserved. Assassin's Creed, Ubisoft, and the Ubisoft logo are trademarks of Ubisoft Entertainment in the US and/or other countries."

For Prince of Persia pictures: "© 2003 Ubisoft Entertainment. Based on Prince of Persia® created by Jordan Mechner. Prince of Persia is a trademark of Waterwheel Licensing LLC in the US and/or other countries used under license."

For Watch Dogs pictures: "© 2014 Ubisoft Entertainment. All Rights Reserved. Watch Dogs, Ubisoft and the Ubisoft logo are registered or unregistered trademarks of Ubisoft Entertainment in the U.S. and/or other countries."


Author

Jonathan Cooper is a video game animator from Scotland who has been bringing virtual characters to life since 2000. He has led teams on large projects such as the Assassin's Creed and Mass Effect series, with a focus on memorable stories and characters and cutting-edge video game animation. He has since been focusing on interactive cinematics in the latest chapters of the DICE and Annie award-winning series Uncharted and The Last of Us.

In 2008, Jonathan started the BioWare Montreal studio, leading a team to deliver the majority of cinematic cutscenes for the 2011 British Academy of Film and Television Arts (BAFTA) Best Game, Mass Effect 2, and in 2013 he directed the in-game team that won the Academy of Interactive Arts and Sciences (AIAS/DICE) award for Outstanding Achievement in Animation on Assassin's Creed III.

Jonathan has presented at the Game Developers Conference (GDC) in San Francisco and at other conferences across Canada and the United Kingdom, and holds a Bachelor of Design honors degree in animation. You can follow him online at his website, www.gameanim.com, and on Twitter at @GameAnim.


Chapter 1

The Video Game Animator

What It Means to Be a Video Game Animator

So you want to be a video game animator, but what exactly does that entail? And what, if any, are the differences between a video game animator and those in the more traditional linear mediums of film and television? While there is certainly a lot of overlap in the shared skills required to bring a character to life in any medium, there are many unique technical limitations, and opportunities, in the interactive artform.

Artistry and Creativity

To begin with, having a keen eye for the observation of movement in the world around you (and a desire to replicate and enhance it for your own creative ends) is the first step to becoming a great game animator. The willingness to not only recreate these motions but to envision how this movement might be controlled by yourself and others, allowing players to embody the characters you create, is a key factor in separating game animators from the non-interactive animators of linear mediums.


Understanding key fundamentals of weight, balance, mass, and momentum to ensure your characters are not only pleasing to the eye but also match the player’s understanding of the physics of the worlds they inhabit is equally essential. A desire to push the envelope of visual and interaction fidelity within your explorable worlds, which can afford players new stories and experiences they could never have in the real world, with believable characters that are as real to them as any created in another medium, is a driving force in pushing this still-young medium forward. The ultimate goal is immersion—where players forget they are in front of a screen (or wearing a virtual/augmented-reality headset), escaping their own physical limitations and instead being transported into our virtual world, assuming their character’s identity such that it is “they themselves” (and no longer their avatar) who are in the game.

Technical Ability

Beautiful animations are only the first challenge. Getting them to work in the game and play with each other seamlessly in character movement systems is the real challenge. The best game animators get their hands dirty with the technical side, seeing their animations through every step of the way into the game. A good game animation team will balance animators with complementary levels of technical and artistic abilities, but strength in both areas is only ever a good thing. Only in thoroughly understanding tools, processes, and existing animation systems will new creative opportunities open up to animators willing to experiment and discover new techniques and methods that might make animation creation more efficient or increase quality.

Animation inside a game engine.

Teamwork

Beyond simply making motions look clean and fluid, it is a game animator’s responsibility to balance multiple (sometimes conflicting) desires to make a video game. A finished game is always more than the sum of its parts, and it is when all of a development team’s disciplines pull in the same direction in unison that we delight and surprise players the most. Animators must work in concert with designers, programmers, artists, audio technicians, and more to bring their creations to life, so those harboring a desire to sit with headphones on and the door closed, focusing solely on their own area, will be quickly left behind in the race to create the best possible experiences. A game animator can only truly succeed with a good awareness of the other disciplines in game development and the ability to speak their language, empathize with their needs, and know at least a little of all areas of game development.

Design Sense

Game animations do not exist in a bubble and are not simply created to look good, but must serve a purpose for the greater game. Animators handling player character animation, especially, must balance a game’s “feel” with visual fidelity (though the two are not mutually exclusive). Designers touting conventional wisdom will often fall back on the tenet of quicker animations equaling better and more reactive characters, but go too fast without the appropriate visual feedback and the characters will simply not exist believably in the world, destroying the illusion of life and hurting the tactile gameplay “feel” in the opposing direction. Ultimately, it is a game animator’s responsibility to create consistency in the game world, with everything displaying a relative weight and physics, with gravity being a constant throughout. In game development, we might hope that “everyone is a designer,” but the best game designers are the keepers of the game’s goals with an idea of how to reach them. It is the game animators’ role to know enough of design to ensure their creations do not hurt but serve the design goals while maintaining visual fidelity as much as possible.

Accepting the Nature of the Medium

It goes without saying that a great game animator must be passionate about their chosen field, but they must also understand that this chosen field is not just animation but game development as a whole.


Those wishing for the more easily scheduled approach of traditional linear animation production will likely grow frustrated with the fluid nature of game development. You cannot plan how many iterations it will take for a new mechanic to become fun, so it follows that you must always be open to schedules in a state of flux. Avoid being precious about your work because it will change or be thrown away, but, similarly, don’t be dissuaded, because you will always improve and refine your animation as the game progresses, no matter how many times you might rework it.

Life Experience

The best game animators love playing games and can find something to learn from every work, but they also go beyond simply referencing other games or movies. If we wish to truly improve our artistic works (and gaming as a whole), we must escape the echo chamber of comparing with and copying our peers and instead bring as much of our own varied life experience into our work as possible. The blandest games are those that only reference their competition, and the most pedestrian animation choices are inspired only by other animations. Be passionate about games, but also be passionate about life and the world around you, and get away from the screen outside of work as much as possible.

Different Areas of Game Animation

While game animators in larger teams typically specialize, those at smaller studios may wear the many hats listed below. Regardless, even when specializing, it is incredibly valuable to understand other areas of game animation to open up opportunities for creativity across disciplines—often, the best results occur when lines are blurred such that an animator might excel in all moving aspects of a game.

Player Character Animation

The primary and easily the most challenging aspect of game animation is the motion of characters under the player’s control. This occurs in all but the most abstract of games and is therefore an important skill for any game animator to focus on and have under his or her belt. Character animation style and quality can vary greatly across different game types (and studios), depending upon their unique goals, but one thing is becoming more apparent as the medium progresses—bad character animation is unacceptable these days. Bringing up the baseline standard is one of the main goals of this book.



The Assassin is an excellent example of player character movement. (Copyright 2007–2017 Ubisoft Entertainment. All Rights Reserved. Assassin’s Creed, Ubisoft, and the Ubisoft logo are trademarks of Ubisoft Entertainment in the US and/or other countries.)

Facial Animation

Facial animation is a relatively recent requirement, due to advances in the quality of characters. When we bring the cameras in close enough (especially on more realistic faces), even the most undiscerning player will be able to instinctively critique the facial motion due to their experience with other humans.

Great facial animation is a crucial element of story-based games like The Last of Us. (Courtesy of Sony Interactive Entertainment.)

How do we avoid these pitfalls when aiming to create believable characters that serve our storytelling aspirations? There are many decisions throughout a project’s development that must work in concert to bring characters to life that are not just believable, but appealing.

Cinematics and Cutscenes

A mainstay of games with even the slightest degree of storytelling, cinematic cutscenes give developers the rare opportunity to author scenes of a game so that they play out exactly as they envision. They are a double-edged sword: used sparingly and done well, they can bring us much closer to empathizing with characters, but used too much and they divorce us from not just our protagonists but the story and experience as a whole. A well-rounded game animator should have a working knowledge of cinematography, staging, and acting to tell stories in as unobtrusive and economical a manner as possible.

Technical Animation

Nothing in games exists without some degree of technical wrangling to get it working, and game creation never ceases to surprise in all the ways the game can break. A game animator should have at least a basic knowledge of the finer details of character creation, rigging, skinning, and implementation into the game—even more so if on a small team where the animator typically assumes much of this responsibility alone. A game animator’s job only truly begins when the animation makes it into the game—at which point the systems behind various blends, transitions, and physical simulations can make or break the feel and fluidity of the character as a whole.

The character mesh, rig, and export skeleton.

Nonplayer Characters

While generally aesthetically similar, the demands of player animations differ greatly from those of nonplayer characters. Depending on the goals and design requirements of the game, they bring their own challenges, primarily with supporting artificial intelligence (AI), such as decision-making and moving through the world. Failing to realize NPCs to a convincing degree of quality can leave the player confused as to their virtual comrades’ and enemies’ intentions, undermining their believability.

Cameras

The camera is the window through which the game world is viewed. Primarily a concern for player character animation in 3D games, a bad camera can undermine the most fluidly animated character. A good game animator, while perhaps not directly controlling the implementation, should take a healthy interest in the various aspects of camera design: how it reacts to the environment (colliding with walls, etc.), the rotation speed and general rigidity with which it follows player input, and the arc it takes as it pivots around the character in 3D games. It’s no wonder a whole new input (the joypad right-stick) was added in the last decade just to support the newly required ability to look around 3D environments.

Gameplay camera setup.
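As a rough illustration of that rigidity trade-off, many third-person cameras ease their rotation toward the player’s input rather than snapping to it instantly. Below is a minimal, engine-agnostic sketch in Python; the function and parameter names are invented for the example, not taken from any particular engine:

    import math

    def follow_yaw(camera_yaw, target_yaw, stiffness, dt):
        """Ease the camera's yaw (in degrees) toward the input direction.

        Higher 'stiffness' gives a more rigid, snappy follow; lower values
        lag behind player input for a softer feel.
        """
        # Wrap the difference to [-180, 180] so the camera turns the short way around.
        diff = (target_yaw - camera_yaw + 180.0) % 360.0 - 180.0
        # Frame-rate-independent exponential easing.
        blend = 1.0 - math.exp(-stiffness * dt)
        return camera_yaw + diff * blend

Tuning a single stiffness value like this is one of the simplest ways to feel out how “rigid” a camera should be before committing to a full camera system.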

Environmental and Prop Animation

Perhaps less glamorous than character animation, an animated environment can bring soulless locations to life. Moreover, a character’s interaction with props and the environment with convincing contact points can place a character in the environment to an unparalleled degree. Use of weapons, primarily guns and melee types, is a mainstay in games, and the knowledge required to efficiently and convincingly animate and maintain these types of animations is an essential part of most game animation pipelines. While doors, chests, and elevators might not be demo-reel material, they are all essential in the player’s discovery of a more interactive world.


Many game characters require weapon props.

Required Software and Equipment

Digital Content Creation (DCC) Software

The primary method of animation content creation for video games has always been via expensive DCC packages such as Autodesk’s Maya and 3ds Max, but these are now facing competition from free offerings such as Blender that increasingly support game development. In the future, more and more of the creation aspect of game development is expected to take place within the game engine, but for now we still rely upon the workflow of first creating and then exporting assets into the game.

A character animated in Maya.

A good game animator will have at least a basic knowledge of polygon modeling, rigging, and skinning, as well as an intimate knowledge of the animation export process and its various abilities and limitations. A good understanding of the many ways an animation can break on export will save time and increase iteration, making the difference between an acceptable game and an exceptionally polished animated one.
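As a concrete illustration of that kind of knowledge put to work, a pre-export sanity check is a typical first script for an animator to write or request. The following is a minimal sketch assuming a Maya pipeline scripted with the standard maya.cmds module; the joint-naming convention and the frame-range rule are invented for the example:

    import maya.cmds as cmds

    def check_export_ready(export_prefix="export_"):
        """Flag common causes of broken exports before they reach the engine."""
        problems = []

        # Joints intended for export should follow the naming convention.
        joints = cmds.ls(type="joint") or []
        export_joints = [j for j in joints if j.startswith(export_prefix)]
        if not export_joints:
            problems.append("No joints found with prefix '%s'" % export_prefix)

        # Keys outside the playback range may be dropped by some exporters.
        start = cmds.playbackOptions(query=True, minTime=True)
        end = cmds.playbackOptions(query=True, maxTime=True)
        for joint in export_joints:
            key_times = cmds.keyframe(joint, query=True, timeChange=True) or []
            if any(t < start or t > end for t in key_times):
                problems.append("%s has keys outside %s-%s" % (joint, start, end))

        return problems

    for problem in check_export_ready():
        print("EXPORT WARNING: " + problem)

Catching issues like these before export, rather than after a broken animation shows up in-game, is exactly the time-saving described above.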

Game Engines

This is the software that wrangles together all the various game assets and data creation, from animation, art, and audio to level design and more, and spits out a complete video game. While most studios use their own in-house engines that cater most efficiently to the type of game they make, in recent years a race to democratize game development for everyone has culminated in the two most popular game engines, Unreal and Unity, going free. Offering a suite of tools to create and integrate all aspects of game creation, these professional-grade game engines are now freely available to everyone reading this book!

A typical game engine editor.

Reference Camera

Even the best animators work to a video reference (this must be stated, as some juniors mistakenly see it as cheating), and these days every phone contains a video camera good enough to capture any actions you or your friends and colleagues can perform. Animating to a video reference will not only save time by preventing basic mistakes in motion, but it is also the surest way to raise your animation quality as you better understand the physics and motion of the human body. While you can source references from the internet, the only way to truly get exactly what you need is to have someone film you acting it out. Get outside if there’s no room in the studio and get recording!

Video Playback Software

While video players come as standard in your computer’s operating system, it is important to use one that allows both scrubbing (easily moving the video forward and backward) and frame-stepping for detailed analysis of your chosen reference video. Advanced functionality such as the ability to edit videos or play a chosen section on a loop is ideal, but the most robust free video player for scrubbing animation is Apple’s QuickTime.

Notepad

Game animators do much more than just think about animation and must take notes on everything from design decisions to workload lists. Keeping a notepad handy helps you visually hash out ideas as you discuss them with other team members—it’s a visual medium, after all, and an agreed-upon diagram can speak a million more words than an easily misinterpreted spoken idea. Taking a notepad and pencil/pen to every meeting and creative exchange will make your life so much easier, as you will often come back to ideas and tasks some time after they have been decided upon.


Chapter 2

The Game Development Environment

Finding the Right Fit

We can choose our friends but not our family, and while we can’t all choose where we work (at least initially), finding the right team will make a dramatic difference in your experience of game development.

Studio Culture

Just as a church is not the building but the community inside it, a game is not made by a monolithic studio but instead a collection of individuals who come together to create. Team makeup is fluid and will change from project to project, but generally individuals housed within a studio are drawn to a particular style of game or studio ethos. It is the combined values of these individuals that make up a studio culture. Different studios have different strengths, weaknesses, and priorities based on this team culture, and this often flows from the original founders (and therefore founding principles) upon which the studio is based. Some studios focus on storytelling, some on graphics and technology, some on fun gameplay over everything else. While none of these attributes are mutually exclusive, studios and teams within them will generally be stronger in one or a few areas depending on the team composition.

Beyond different discipline strengths, work ethic, attention to detail, perfectionism, production methods, and work/life balance all vary from studio to studio. Again, while not mutually exclusive, it is impossible to balance all the different approaches to game development, so it is important to find a studio environment that matches your values if you are to be happy and sustain a long career in game development. For the ambitious game animator, it is important to find a team that values great characters and the desire to expend effort and resources bringing them to life. Thankfully, while strong animation skills were for years seen as a benefit or a “nice-to-have,” they are now all but essential for studios with any kind of narrative ambitions. As such, recent years have seen an explosion of animation teams, whereas just a few years ago the entire workload might have been assumed by a much smaller or single-person department.

Team Strengths

As said above, studios vary depending on the makeup of their team members, often reflecting their initial founders. Studios and teams with a strong programming pedigree tend to favor technical excellence. Those with artists at the core tend to excel in incredible visuals and world-building. Those with an animation background place a desire to tell stories with memorable characters at the forefront.

All of the strengths above cannot come at the expense of good gameplay, so a strong design sense is essential to any team. While this rightfully falls on the shoulders of the designers, a game will only benefit from everyone, including game animators, knowing as much about game design as possible. Because there is no formula for good game design, due to the youth of the medium, there is no rich history of training to become a great game designer (unlike established avenues like animation, programming, and art). As such, many designers and creative directors come from backgrounds in other fields. This means that even individual game designers will typically approach a project with a bent toward art or programming and so on.

While junior animators are not yet in a position to choose, once established, a game animator seeking employment from various studios should consider whether the team strengths match his or her own goals and aspirations.

Game Pillars

From the team strengths generally comes the desire to focus on the “pillars” of a project. Pillars form the basis of any project and allow a team to zero in on what is important and what is not. Every idea that arises throughout the development cycle can be held up as either supporting a pillar or not, and so can be endorsed and given priority, or, if not, then dismissed or given less priority.

Every game has game design pillars upon which it is built.

In a game where animation-heavy melee combat or traversal of the world is a core gameplay pillar, animation will be more supported than in a project whose core pillars do not depend so much on animation. Similarly, interacting and talking with a variety of high-fidelity characters will also require that facial animation quality become core to the experience, whereas a game about driving cars would naturally focus elsewhere.

Working in a team where animation quality is seen as a necessity is an entirely different experience from working on one where it is not. Resources, such as programmer time, hiring a commensurate number of animators for the workload (animation quality often rests on the time allotted), and a willingness to listen to and take on the game animators’ ideas and requests, will be much more forthcoming on such a team. Naturally, it is not always possible to start a career in such a studio environment, but this is certainly something worth pursuing as an eventual goal.

While it is rare to be able to pick and choose, it is important when looking for studios and teams to join that the kinds of games the studio makes resonate with you. As a game animator, it is highly likely that you prefer to play games with excellent animation quality, so seeking out the teams and studios that produce it is a natural fit.

Team Size

The team size you aim for should be the one you are comfortable with. Large teams generally afford greater programming support and resources, allowing game animators to push the envelope in terms of visuals and believable characters. However, starting your career in such a team means you will only have access to a small piece of the pie, sometimes leading to being pigeonholed or specializing in one specific area of game animation. As such, it might be hard to gain a grasp of the various areas that make up overall game development—something that can only help you grow as a game developer.


A mismatch occurs when a smaller studio attempts to take on the big guns of AAA without the means. Conversely, large teams with many moving parts have a hard time producing designs with the tight and concise visions of individual auteurs. Finding a team aware of its own limitations, and one that matches your own desires, is key.

Working on a small team and project allows for a much better overview of the entire game development process and will afford opportunities to try various aspects of not only game animation but likely character creation and rigging, visual effects, and so on. While this can make for a dynamic and varied career, you’ll never be able to create the volume of work required to bring the most complex characters to life and will not have the tools, technology, or support that are afforded by a larger team. That said, some of the most delightful games and characters come from the smallest teams when animation and characters are a core focus.

Team Dynamics

Beyond strengths and weaknesses and a variable priority of animation as a focus, different studio and team makeups also vary in working practices. Time and time again, communication between team members is cited as a key area for improvement in project postmortems, but every team approaches this differently. While many focus on meetings, this can lead to too much time wasted that should be spent actually making the game. Some studios put the onus on individuals physically talking to one another, sometimes moving desks to work as close as possible to their most important contacts. Some teams are spread across the globe, so they turn to technology such as video conferencing and other solutions to interact virtually. As team sizes grow, some adopt layers of middle management to facilitate communication between ever-spread-apart teams that cannot track all the moving parts of a giant project.

Regardless of approach, one thing is certain—good games are made by real people with real emotions and interactions, and the more human an interaction can be made, the better and more soulful a game will be. And yet, many great games have been made under conflict and extreme pressure, and some teams even work to reproduce this. While this is less than ideal (certainly no great game was ever created without at least a few heated disagreements and drama), such is the nature of different personalities working together creatively.

Game Animator Roles

Within animation (and especially on large teams where more structure and hierarchy are generally required), there are typically the following positions available, with their respective responsibilities. Every studio has its own structure, so titles will vary, with many opting for further granularity with senior and junior/associate roles to provide a clear career progression path, but let’s look at the different roles one can assume or aspire to as a game animator. The path you take will be unique to your own skills and preferences, and it is recommended to cultivate as much as possible an overview of all areas of animation and game development as a whole to become as rounded and versatile as possible—maximizing opportunities for the full length of your career.

Gameplay Animator

The gameplay animator handles in-game animation, with responsibilities including player characters, nonplayer characters, and environmental animation such as doors and other props. The gameplay animator must have a keen design sense and, importantly, an awareness of the “feel” of the player character with regards to weight and responsiveness.

Frame-perfect gameplay animation in Street Fighter V. (Courtesy of Capcom.)

Cinematic Animator

Cinematic animators generally focus on storytelling more than game design, and should have a good understanding of acting, staging, composition (if required to handle cameras), pacing, and other elements of traditional cinematography—not to mention facial animation.

Lead Animator

Usually either gameplay or cinematic, a lead animator’s primary responsibility is to make sure the team delivers the best-quality animation they can within the time allotted. A sense of organization and communication with other disciplines allows the lead animator to keep the animation team on track and up to date with the changes on the project. A good lead will inspire and mentor animators who work under him or her, and, ultimately, the lead animator is at the service of the animation team—removing roadblocks to allow them to create their best work possible.

Expect to become familiar with workload spreadsheets as a lead.

Google Docs is a great way to manage information across a project, allowing multiple team members to update documents in a single shared repository—and it’s free!

Animation Director

While most teams have an art director, only the largest teams tend to require a dedicated animation director role. This is to separate the organizational role of the lead from the purely aesthetic pursuit of the director. The animation director is expected to follow through on the artistic vision, continuity of style, and quality throughout the project while elevating individual animators’ skills through mentorship, insightful critique, and most of all example.

Principal Animator

Rarer even than animation directors are principals. In order to offer an alternative career path to just management or direction, some studios will also offer a principal animator role that recognizes a director-like seniority but keeps the animator animating. Many animators who reach management-only positions rightly miss animation—the reason they started in the first place, after all. Not to mention moving a senior from the team into a management-only role means less senior-level content is being created.

Technical Animator

A technical animator focuses on the technical rather than the artistic side of game animation, though they should ideally have an eye for both. They’ll typically handle elements such as rigging, skinning, scripts, tools, and pipeline efficiency to support the animators, allowing them to focus on creating animations with minimal issues.

Python is a popular scripting language for tool creation.
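For example, a few lines of Python can enforce a naming convention across selected nodes, the kind of small quality-of-life tool technical animators write constantly. This is a minimal sketch assuming Maya’s standard maya.cmds module; the prefix rule itself is invented for illustration:

    import maya.cmds as cmds

    def add_prefix_to_selection(prefix="anim_"):
        """Rename the selected nodes so downstream tools can find them by prefix."""
        renamed = []
        for node in cmds.ls(selection=True) or []:
            if not node.startswith(prefix):
                renamed.append(cmds.rename(node, prefix + node))
        return renamed

    # Usage: select the offending nodes in the viewport, then run:
    # add_prefix_to_selection()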

Animation Technical Director

Again, generally only on larger projects, the animation technical director (TD) oversees the technical animation team and has a more teamwide impact on the tools and processes used for any one project. An animation TD will also be expected to create new technologies and workflows that allow previously unseen animations and rigs to come to fruition.

Other Game Development Disciplines

Unless you’re an indie developer flying solo or a remote worker separate from the core team, every day as a game animator you will be interacting with a variety of team members, each with their own expertise, personalities, priorities, and desires for their own work. Fortunately, they will all be working with you toward the same goal of making a great game! Often, it’s not about your own desires and sometimes not even about making the best animation, but what’s best for the game. This section explains how your daily interactions may differ with each discipline.

Programmers

Gameplay animators interact with gameplay programmers on a daily basis to bring their playable characters and systems to life. While programmers are generally cut from a different cloth than animators (they prefer mathematics to motion), there is common ground when good programmers pursue the best character movement and control—especially those that focus on the end result that the player experiences. Other programming roles include gameplay, systems, tools, and graphics, with lead and director positions similar to those found in animation. As a project closes, the programmers will still be fixing bugs and stabilizing builds weeks after the animators have put down their pencils, so they should always have the team’s respect. While coding is becoming a more popular and common career path in schools, the very best programmers are worth their weight in gold.

Artists

Animators will mostly be interacting with the character art team with regards to character creation, with TDs and technical animators providing a bridge between the disciplines. Characters should be designed and created with the animator’s input because it’s difficult to bring appeal to a character without any to start with, and, even at the concept phase, an animator’s eye can catch potential rigging and skinning problems long before they make it into the game. It is essential for animators to make suggestions at the character-design stage that promote a strong silhouette, making the animators’ job of achieving appeal from all angles easier, as well as suggesting moving elements like hair and cloth that might provide overlap and follow-through to add further visual interest to motions.

Cinematic animators must be in constant communication with not just the character artists for faces, but also with the level/environment artists where their cutscenes take place to minimize nasty surprises that affect the preset cameras and acting in cinematics. Mirroring the relationship between level designers and level artists (where artistic license is sometimes taken to the detriment of gameplay), artists can also negatively affect an already-created cutscene, such as by dropping a tree in the middle of a scene where the characters might be walking. As ever, the only real solution is to ensure a healthy dialogue between departments.

Other areas of art include user interface (UI), visual effects, and technical artists, all with a similar lead/senior hierarchy to animation.


While it makes sense in smaller studios, some large studios wrap animation under art rather than it being its own department, with the lead animator reporting to the art director. Unfortunately, this rarely results in exceptional animation due to the differences in expertise, so it should be a warning sign to animators wishing to push the envelope.

Design

Design is heavily involved with player character movement and should always be pushing for the best “feel.” Naturally, their initial inclination is always toward speed of response, which presents the game animator with the challenge of still providing weight and fluidity to a character.

Level design overlaps with both gameplay and cinematic animation.

In these types of trade-offs, response is king, but visual feedback is important, so it is the animator’s job to provide the player with the correct visual tells both before and after an action—instantaneous visual feedback can supplant a non-instantaneous result. Player awareness may be “better” with an extremely far and wide camera, but the connection between the player and avatar will be weakened as a result. As such, animators and designers must strike a balance between what feels good and what looks good when the two are at odds. In addition, gameplay animators on navigation-focused games must deliberate over metrics with level designers to ensure the player character’s movement makes sense within a world that is built to a sensible scale.

The design department usually features roles such as game, level, technical, and systems design, with a hierarchy encompassing leads and directors, with game and creative directors at the very top, usually emerging from their ranks. However, as mentioned earlier, creative directors need not come from a design background, though obviously they must have a strong design head to make well-considered decisions, and the makeup and style of the game will generally be a reflection of this.

Audio and Effects

As the saying goes, “sound is 50% of a game,” and any animation that already looks good will be enhanced even more when the sound effects are added. The same can be said for the visual flair added to any attack by a visual swoosh that replicates the smearing or multiples of traditional 2D animation. Generally, most of a project is spent with minimal effects and no music, so the difference is apparent when they’re finally added in the later stages.

Due to being downstream in the pipeline (sound and effects can only come after the visuals are in place), the audio and effects teams often face an intense amount of work at the end of the project, making all others seem light by comparison. As such, animations and cinematics generally have deadlines that run ahead of the real end date to ensure time is allocated to add these essentials, and cinematics especially work toward a “timing lock” date after which the animation can still be polished but not retimed, as it will throw off the other teams.

It is a good idea for cinematic animators to maintain a healthy relationship with effects artists throughout development, as even while swamped at the end they still desire to make the best work possible, so any suggestions the animator might have to improve an action or cinematic are welcomed, time permitting. Details added days before shipping can bring a scene together in a way that is just impossible without them.

Quality Assurance

A strong quality assurance team is the vanguard in catching the multitude of bugs that will crop up over the course of a project, especially if tight schedules don’t allow them to be fixed as the project goes on. Because QA staff are often on temporary contracts, some teams do not treat them with the respect they deserve, but a good QA team is invaluable in preventing broken characters and animations from becoming internet memes on launch, so it should be kept in the loop as new gameplay systems are implemented. It is highly recommended to keep on top of your bug count throughout the project rather than leaving it all to the end, because it’ll allow you to focus on polish in those final weeks rather than just making your animations work. Techniques for efficient bug-fixing of animations will be detailed later.

Management

Be it your direct lead/supervisor or the project leadership, there is always some layer of leadership to handle the overall production schedule, vision for the project, and high-level studio decisions. The creative vision is often separated from the production schedule, as the two compete for time vs budget. The former would take all the time and money/resources in the world to complete a project to perfection, while the latter understands the reality that if projects go over budget and time, then the consequences can be severe, such as project cancelations or worse.

Creatively, animators have animation leads, animation leads have animation directors, and animation directors have game and creative directors—each with a higher level of oversight of the entire project so they can make decisions from those positions to maintain consistency. Schedule-wise, animators have animation leads that may in turn have project managers and producers to oversee the project from a high level and ensure dependencies between other departments are maintained.

There is often conflict between production and creativity, and as animators we naturally tend to favor the latter, but the reality is that if the scope of a project isn’t reined in, it simply won’t ever see the light of day. The ideal situation is a kind of organized chaos that leaves enough room for experimentation and creative risks without blowing the scope or schedule—as such, managers who come up through the ranks and have a firm handle on the intricacies of their own field generally excel over those with a background only in organization and project management.

Public Relations and Marketing

You can animate the best game in the world, but it’s all for naught if no one knows about it. The marketing department is an essential element of any team that actually wishes their game to be seen or played by anybody. Animators will likely have a lot of interaction with marketing later in the project as it becomes time to promote the game, and in the video-heavy internet age, animations and cutscenes are a natural part of any marketing and launch strategy. No one knows the visual action of a game more than the animators, so be proactive and offer suggestions of the best scenes and moves to be included in any upcoming trailers to ensure they are flawlessly polished before going out into the world. There’s nothing worse than seeing an unfinished animation you were planning on polishing show up front and center in a trailer when just a little extra work would have made it perfect.

A Video Game Project Overview

Now that we’ve covered the various types of roles that make up a game development team, let’s look at how a full game production is typically structured and what the animator’s role will likely be at each phase.

Phase 1: Conception

Before any characters are modeled and animated, before a line of code is written, a game starts with an idea. This stage can vary wildly depending on how that idea comes about. Is it a brand-new intellectual property (IP)? Is it a sequel? Does it center on a new and unique gameplay mechanic? Is it a personal story from the creative director just waiting to burst out? From all of these birthplaces, an idea is only that until it gets written down outside the mind of the creator and begins to take shape, often with the input of a core team of creatives fortunate enough to be involved in the initial stages of what might be something enjoyed by millions of players around the world in the future. It may start with something as simple as an image scrawled on a napkin or a conversation over beers, but as soon as it gets outside the head and into some tangible form this stage of development is called “conception.”

What happens next, as more of the team becomes involved, is a series of visualizations of ideas. Artists conceptualize and build out locations, characters, and potential gameplay storyboards. Designers build simple prototypes in which they can begin experimenting with the most rudimentary of rules and controls. Writers can begin scrawling down high-level story and characters that might exist in the as-yet nonexistent world. Basically, anyone who can make something themselves can go off and begin investigating directions inspired by that initial idea. It is here that the animator has perhaps the most powerful ability with which to make something from nothing—previsualization.

Ustwo Games’ Monument Valley was born of this initial concept sketch. (Courtesy of UsTwo Games.)

With only a rudimentary knowledge of modeling and rigging (or temporarily using assets from a previous project), the animator should begin blocking out potential gameplay ideas with or without the help of input from design or other team members. Animating a camera to behave like a gameplay camera affords the animator the unique ability not only to make suggestions at this early stage but also to show how they might look and play out as if under player control. Conception can be an exciting time for an animator, as ideas generated here can provide the basis for mechanics built over the following years, though expect to throw away many more than will be kept. It’s better to rapidly hash out a variety of “sketches” than to home in on and polish any one at this stage.

Ultimately, this stage will generally finish with a pitch to whoever makes the decision on future projects, be it the studio management or a potential publisher funding the project. At this stage, an incredibly persuasive tool in the team’s arsenal will be a video mock-up of what the final game can look like, which naturally leans heavily on animation—only this time with a more finished look, illustrating things like setting, character, and, importantly, what the gameplay will be. Expect the creation of such a video to be tough, with input from all sides and many revisions, as it coalesces into the team’s vision.

This “target footage” video and other accompanying pitch materials like presentation slides can be invaluable throughout the project as new team members are brought on board, as well as being a reminder of the initial vision to make sure the project direction stays on track. The industry relies less and less on design documents to maintain vision, as design is fluid and will change throughout the project, rendering anything static like a document obsolete very quickly. Conversely, a target footage video loosely provides a vision and serves more as inspiration for the potential years of work that will follow.

Target footage gives an idea of gameplay in this early Rainbow Six pitch. (Courtesy of Ubisoft.)

Phase 2: Pre-production

If your team has successfully pitched and been given a “green light” to move forward with the project, the next phase, pre-production, is incredibly important in readying the team to build the full game. Pre-production generally encompasses the prototyping of mechanics to support the core gameplay, tools and pipelines to allow the team to work as efficiently as possible, rendering technology to achieve the visual result, and related artistic benchmarks for the team to hit, including the animation style.

Here, gameplay animators will be working with designers and animation programmers to get the player character up and running as quickly as possible so that prototyping within the actual game (rather than in independent tests) can take place, as well as basic movement through the game’s world so designers can begin building it out. At this stage, it’s likely that characters are in a high state of flux, so nothing here is expected to look finished beyond some key actions like idles and commonly seen walks and runs. If your game features combat, there will likely be a lot of experimentation with how silhouettes and cameras work to aid gameplay and readability of the characters onscreen, as well as a push to narrow down the stylization of the character movements. It is here that decisions will be made as to whether to adopt a keyframe or motion-capture pipeline, depending on the visual look (and budget) of the animation direction.

Technical animators will be heavily involved in setting up file structures and naming conventions, as well as initial character rigging to be improved throughout the project or as new requests come in. An ideal pipeline allows the team to be flexible, never having to stick with a rig issue or slow workflow because of a misjudgment made at this early stage, but technical decisions made here will have an impact throughout production.

Cinematic animators will begin experimenting with the style of cinematography for any storytelling elements of the game and should be heavily involved in prototyping facial rigs. Any game with aspirations of storytelling with believable (not necessarily realistic) characters nowadays must focus heavily on getting the faces right, and that falls a lot on the facial rigging and ease of workflow for the animators. Too cumbersome a setup and this already heavy workload will be impossible to complete with any degree of quality, especially if the face has not been set up correctly for animation.

At the end of the pre-production phase, the team should have a section of the game up and running, and, importantly, must have answers to key questions surrounding the game before moving onto the next phase. Ideally, a playable demo will be created showing that the game offers something fun and refreshing in its second-to-second gameplay (called the core gameplay loop). In addition, there should be a series of in-game visual benchmarks that give the art and animation teams the visual bar that they must achieve throughout the project. All these elements can be wrapped up in one demo, or as separate entities, but, importantly, should feature real working technology rather than fake “hacks,” as questions remaining at this stage of the project can turn into real risks further down the road.

Phase 3: Production

Some animators prefer the idea-generation period, some prefer endlessly prototyping, but many prefer production most of all because they’re no longer doing throwaway work and are instead building the real bulk of the game that players will eventually see. Easily the longest stretch of the entire project, by now many of the high-level questions such as story, characters, core gameplay, scheduling, and technology should have at least been figured out so the team can get on with the business of creating all the assets that make up the game.

Gameplay animators in this stage will mostly be working with finished character rigs to create the sometimes thousands of animations that make up their player and nonplayer characters. Cinematic animators will be hard at work assembling scenes, going through the various stages of previz to completion. If the project has opted for motion capture, shoots should be occurring as regularly as the budget allows, with animators fed enough mocap to keep them busy until the next shoot. A schedule will set milestones and dates for certain stages of the project.

Games are not linear entertainment, and neither are their productions. The first aim of any project’s production phase should be to get the complete game playable from start to finish in any shape possible. This means inserting story scenes and events in as rough a form as possible. The player character(s), by now able to perform most actions in rudimentary form, will start shaping up as design homes in on the “feel” they are going for and new abilities are added on a weekly basis. Animators work with level designers to add relevant elements to the game as they come online, such as ladders to climb or chests to open up.

In the fast-paced production phase, it is beneficial for animators to always deliver a quick animation to whoever needs it (design or programming, usually), so they can begin implementing knowing that a more finished animation will be coming later. Many animators complain about this temporary work being judged harshly on its rough look, but it is down to the entire team to educate themselves as to what style/quality of animation says “temporary,” such as previz or raw, unedited mocap. A confident team will know the animators will deliver high-quality assets later when they are finished (with the target footage video from earlier helping to allay these fears).

As the production phase can sometimes last for years, with a team growing throughout, one of the biggest challenges is to remain focused on what is important and relevant to the original vision. Here, deadlines and demos are a great way to have the team pull together for a common goal. Often the bane of mismanaged projects is when goals are inserted without proper planning; having a target such as an external demo allows the whole team to coalesce on a shared deliverable where at least a portion of the game needs to look finished. Be it a demo for a publisher or to show to the public for marketing purposes, these slices of polish serve a similar purpose to the art benchmarks of the early stages, setting a standard for the rest of the game to aim for while forcing the team to iron out the last remaining pipeline and technology issues. Demos are tough on the animation team, as they need to polish up every action the player will be expected to perform during the demo’s playthrough, but this naturally reduces that work in the closing phases of the project.

It is recommended to add unique details and “nice-to-have” features that give the game some extra personality at this stage, as there will be no time for them in the closing phase of mostly polish and bug-fixing. Similarly, don’t let all the animation bugs pile up throughout production, as for animators it is better to be polishing in the closing phases rather than solely squashing bugs. A game schedule is finite, but it is so large and flexible, and contains so many moving parts, that what actually goes into the game depends very much on what the team prioritizes throughout the production phase. You’ll never get to everything you want, so make sure you do get in what is important to you as an animator to really bring your characters to life.

The ultimate goal of the production phase is to get the entire game to Alpha, whereby the game is content-complete. This means every character, level, animation, and action is accounted for—regardless of quality level. Only then can the project leadership review the entire game and know what editing is still required to finish the project—making the correct cuts or additions to finish the game out to the vision set at the start. Sometimes hard choices must be made here, but failure to do so will make the closing phase even more difficult.

Phase 4: Shipping

Shipping, the closing phase of a project, is key to the quality of the final game players will get in their hands. Players will never bemoan a game that has fewer levels, characters, or abilities than initially envisaged, mostly due to being unaware of the initial plans, but they will notice when a game is lacking polish or, even worse, ships with a multitude of bugs.

Here, it is the animation director’s (or lead’s) responsibility to decide what to focus on when polishing. As time is running out, not every animation is of equal importance, so it is imperative to focus on what is going to be “player-facing”—actions that are seen most often or are key to the experience. When deciding which cinematic cutscenes to work on the most, oftentimes a “golden path” is referred to—the scenes the player must encounter to complete the game. Optional scenes or side content are generally less important, as they will be encountered less by the total player base.

Here, the animation team will take the game from Alpha to Beta, where everything in the game reaches shippable quality. This means replacing all temporary assets with final polished ones and locking timing on cinematics so that audio and visual effects can go in and add their respective elements to bring the story and entire game to life. Ideally, nothing new will be coming in at this phase, but the reality of post-Alpha editing means that there will still be changes. For every change, the animation lead needs to evaluate and argue the cost (in time) of that change and whether it adds to the overall quality of the game (commensurate with the work involved) vs using that time elsewhere.

The concept of “triage” comes into play in the final stages, as not all bugs or issues are given the same priority. It is important for the lead animator, along with the other leads on the team, to first attack the most serious bugs, such as “progression stoppers” that prevent the player from continuing through the game, and work backward from there to issues that are purely aesthetic.

As the project winds up, different teams will finish at different points, usually with the animation team essentially being “locked out” of making changes to the game without approval to prevent the very real risk of breaking something else due to the interconnectedness of game assets. Programmers will likely be the last ones to finish up as they solidify the release candidate (to the publisher or similar), but other teams such as animation must still remain on hand to spot-fix anything essential that comes up relating to their field.

The absolute final stage of game development is sending the disc off to Gold Master for review by the platform holder (in the case of consoles) or uploading to the distribution channel (with a digital-only game). This time is perhaps the most focused of the entire project, with team members working intensely to avoid missing the ship date. Delays at this point can result in any accompanying marketing campaigns (usually prebought) being allocated elsewhere by the publisher or losing impact if the game ships too late.

Phase 5: Post-Release

The period after Gold Master approval usually results in the work easing up, with exhausted team members taking well-earned breaks and returning to a more relaxed schedule. The bug-fixing may continue for patches and updates, and in this day and age it is common to expect some kind of post-release downloadable content as a way of continuing revenue while maintaining team momentum as the next project ideally ramps up. It is rare to fix animation bugs post-release, as the game is still highly fragile and any purely aesthetic change needs to be double- and triple-checked, so animators will likely be treating any DLC as a mini-project (very fun to do when you have a working game already) or starting the whole process again with conception and previz animations for the next project as the game-making process starts over.

Chapter 3

The 12 Animation Principles

Back when video games were still in the Pac-Man era, Disney animators Frank Thomas and Ollie Johnston introduced (in their 1981 book, The Illusion of Life: Disney Animation, Abbeville Press) what are now widely held to be the core tenets of all animation, the 12 basic principles of animation. They are:

1. Squash and stretch
2. Staging
3. Anticipation
4. Straight ahead and pose to pose
5. Follow-through and overlapping action
6. Slow-in and slow-out
7. Arcs
8. Secondary action
9. Appeal
10. Timing
11. Exaggeration
12. Solid drawing


While these fundamentals were written in the pre-computer-graphics days of exclusively hand-drawn 2D animation, they translated perfectly to the later evolution to 3D animation, and while some of them correlate less obviously to the interactive medium, some light reinterpretation reveals their timeless value. Understanding these basics of animation is essential, so it’s time to revisit them through the lens of video game animation.

Principle 1: Squash and Stretch

This is the technique of squashing or stretching elements of a character or object (such as a bouncing ball) to exaggerate movement in the related direction. For example, a character jumping up can be stretched vertically during the fast portion of the jump to accentuate the vertical, but can squash at the apex of the jump arc and again on impact with the ground. Ideally, the overall volume of the object should be preserved, so if a ball is stretched vertically it must correspondingly squash horizontally.

Many video game engines do not support scaling of bones unless specifically required, due to the extra memory overhead (saving position, rotation, and scale is more expensive) and the relative infrequency of cartoony games. However, this principle is important even when posing nondeformable rigs: the theory of stretching and squashing poses comes into play whenever characters perform fast actions, even without actual scaling, by extending limbs to accentuate stretched poses such as during jump takeoffs and landings.
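To make the volume rule concrete, here is a minimal sketch of computing volume-preserving scale factors for a stretch along one axis; the function name and numbers are illustrative, not taken from any particular engine or DCC package:

    def squash_stretch(stretch_y):
        """Return (x, y, z) scale factors that stretch along Y while
        preserving volume (x * y * z stays equal to 1)."""
        if stretch_y <= 0.0:
            raise ValueError("Scale factor must be positive")
        # Compensate equally on the two perpendicular axes so that
        # (1/sqrt(y)) * y * (1/sqrt(y)) == 1.
        side = 1.0 / (stretch_y ** 0.5)
        return (side, stretch_y, side)

    # A ball stretched to 150% height narrows to about 82% width/depth;
    # squashed to 50% height, it widens to about 141%.
    print(squash_stretch(1.5))   # (0.8165..., 1.5, 0.8165...)
    print(squash_stretch(0.5))   # (1.4142..., 0.5, 1.4142...)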

Squash and stretch was a key visual motif of the Jak & Daxter series. (Courtesy of Sony Interactive Entertainment.)


Principle 2: Staging

Staging is only directly relevant to linear portions of games such as cinematics, where the camera and/or characters are authored by the animator (as opposed to gameplay, where both are player-controlled). It is the principle of presenting "any idea so that it is completely and unmistakably clear" (The Illusion of Life: Disney Animation), and involves the use of camera, lighting, or character composition to focus the viewer's attention on what is relevant to that scene while avoiding unnecessary detail and confusion. Staging is relevant to gameplay via level design, however, where certain layouts will funnel the player down a corridor or over a hill to reveal story elements laid out there, or lighting will direct the player's attention. Here, the animator can work with design to best place scenes in the player's view by using techniques like these without resorting to fully commandeering the camera or characters.

Funneled into the doorway, Gears of War players encounter this gruesome sight on kicking open the door. (Copyright Microsoft. All Rights Reserved. Used with permission from Microsoft Corporation.)

Principle 3: Anticipation

Anticipation is used to prepare the viewer for an action, such as a crouch before a jump or an arm pulling back for a punch. It occurs in the natural world because a person jumping must first crouch with bent knees to gather enough energy to lift off the ground, so it is used similarly in animation to sell the energy transfer of an action in a way the action alone cannot.


The Monster Hunter series' gameplay centers around timing the attack animation anticipation. (Courtesy of Capcom.)

Anticipation is a controversial topic in video games, with designers often requesting as few frames as possible and animators pushing for as many as possible. Too little, and the desired move, such as a punch or sword swing, will have little weight to it (a key component of player feedback, not just an aesthetic one). Too much, and the move will feel unresponsive, removing agency from the player and reducing the feeling of directly controlling the avatar. Ultimately, it will depend on the goals of the project and the value placed on a more realistically weighted character, but there are many more techniques than just extra animation frames to help sell feedback, detailed later. Design-wise, anticipation in NPC actions or attacks (called telegraphing) should generally be longer, as it informs the player that they must block or dodge something incoming. There's not much fun in having to guess what an enemy might do with little warning, so the ability to read their intention is essential in creating satisfying back-and-forth gameplay. Both player and NPC actions tend to pair longer anticipation with bigger effect (higher damage) and vice versa, promoting risk/reward: actions with long anticipation might leave their performer vulnerable.

Principle 4: Straight Ahead and Pose to Pose

Referring purely to the process of animation creation, these two techniques describe the difference between working on frames contiguously (starting at frame 1, then onward) versus dropping in only key poses (called blocking) to quickly create a first pass and massaging the motion from there. Again, this has more relevance to linear animation (and especially 2D, where preservation of volume was key to the art of drawing) and essentially describes two philosophies.


Blocking in poses first is the recommended method for gameplay animation.

In CG animation, there is no need to work straight ahead, and the realities of production almost demand that animations be done in multiple passes of increasing quality, so pose to pose is the preferable method for most game animation. This is due mostly to the high likelihood of animations being changed or even cut as the design progresses. Key gameplay animations will continuously require iteration, and doing so with a roughly key-posed animation is much easier than with a completely finished one, not to mention the time it wastes to finish an animation only to see it go unused. It is important never to be precious with one's own work because of this, so keeping something in a pose-to-pose or unfinished state as long as possible not only promotes minimal waste, but allows the animator to create rough versions of more animations in the same amount of time. Ultimately, many animations blending together create a better and more fluid game character than a single beautifully finished animation. This all goes out the window when motion capture is employed, where the animator is essentially provided with the in-between motion as a starting point, then adds in key poses and re-times the action from there. There is an entire breakdown of this process later in Chapter 11, "Our Project: Motion Capture."

Principle 5: Follow-Through and Overlapping Action

Overlapping action covers the notion that different parts of a character's body will move at different rates. During a punch, the head and torso will

lead the action, with the bent arm dragging behind and snapping forward just before the impact of the blow. A common mistake among junior animators is to have all elements of a character start or arrive at the same time, which looks unnatural and draws the eye to clearly defined keyframes.

Street Fighter III's Ryu displays excellent overlap and follow-through on his clothing following this Hadoken. (Courtesy of Capcom.)

Follow-through, while related, instead describes what takes place after an action (the inverse of anticipation). This can cover actions such as a landing recovery from a jump or a heavy sword or axe embedding in the ground and being laboriously lifted back over the character's shoulder, and also includes the motion of secondary items such as cloth and hair catching up with the initial action. Follow-through is a great way to really sell the weight of an object or character, and holding strong poses in this phase of an action will help the player read the action better than the earlier fast movements can. Follow-through has fewer gameplay restrictions than anticipation, as the action has already taken place, though too long a follow-through before returning control to the player can again result in an unresponsive character. To maintain responsiveness, the animator should be able to control when the player can perform a follow-up action by specifying a frame where the player regains control before the end, allowing the follow-through to play out fully if no new input is given, rather than having to cut the follow-through short in the animation itself. Game engines that do not

incorporate such a feature force animators to finish their actions earlier than desired to maintain responsiveness, losing out on a key tool in the game animator's belt for providing both great-feeling and great-looking characters. Related to overlapping action and follow-through is the concept of "drag," where looser objects and soft parts such as hair or weak limbs can drag behind the main mass of a character to help sell the relative weight of one object or body part to another. Follow-through, overlapping action, and drag on nonanimated objects such as cloth cloaks or fat bellies can be performed procedurally in the game engine by real-time rigging that causes these elements to move with physics. Adding rigging elements such as these, especially those that visibly change or enhance a character's silhouette, is a fantastic way to increase the quality of a character's animation with little extra work, not least because their motion will continue into the following animation the player performs.
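The physics behind such procedural overlap is often nothing more than a damped spring chasing the animated joint ahead of it. A minimal sketch, with illustrative stiffness and damping values rather than those of any particular engine:

    def spring_follow(target_positions, stiffness=0.2, damping=0.75):
        """A damped-spring follower: each frame the follower accelerates
        toward the animated target, overshoots, and settles, producing
        drag and follow-through automatically.

        target_positions: per-frame positions of the leading element
        (e.g., a hair root). Returns per-frame trailing positions."""
        pos = target_positions[0]
        vel = 0.0
        out = []
        for target in target_positions:
            vel += (target - pos) * stiffness  # pull toward the leader
            vel *= damping                     # bleed off energy to settle
            pos += vel
            out.append(pos)
        return out

    # A target that snaps 1 unit to the side on frame 5:
    motion = [0.0] * 5 + [1.0] * 25
    print([round(p, 2) for p in spring_follow(motion)])
    # The follower lags behind, overshoots past 1.0, then settles:
    # drag, overlap, and follow-through from one tiny rule.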

Principle 6: Slow-In and Slow-Out

This principle describes the visual result of acceleration and deceleration on moving elements, whereby actions generally have slower movements at the start and end as the action begins and completes, often due to the weight of the object or the character's body part.

Despite taking the same number of frames, the top sphere travels uniformly, while the bottom one moves slowly at the start and end, speeding up then down in the middle.

This notion can be visualized very easily by a sphere traveling across a distance. Uniform/linear movement would see the sphere travel the same distance every frame, while slow-ins and -outs would see the positions bunch gradually closer toward the start and end as the sphere's speed ramps up and down, respectively. Importantly, not everything requires a slow-in and slow-out, but it is a good concept to grasp for when they are required. For example, a rock falling off a beach cliff will have a slow-in as it starts at rest, then gains speed during the

fall, but will finish with an immediate stop as it embeds in the sand below. Were this to be animated, the rock would feature a slow-in and a fast-out with a steep tangent at the end. Conversely, a cannonball fired high in the air would display a fast-in and a slower (yet still fast) out if its target were far away and it slowed due to air resistance. Objects that burst into full speed immediately can look weightless and unrealistic, so it is here again that there is conflict between the gameplay desire to give players immediate response vs the artistic desire to give weight to a character. A sword that swings immediately might look light, so it is the game animator's task to add that weight at the end in the follow-through, giving the action a fast-in but a slow-out as the character and sword return to idle.
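In curve terms, these behaviors are just different easing functions. A minimal sketch contrasting a classic ease-in/ease-out curve with the falling rock's slow-in/fast-out (the function choices are illustrative):

    def smoothstep(t: float) -> float:
        """Ease-in/ease-out: flat tangents at t=0 and t=1, fastest in
        the middle, like the bottom sphere in the figure above."""
        return t * t * (3.0 - 2.0 * t)

    def slow_in_fast_out(t: float) -> float:
        """The falling rock: starts at rest, still at full speed when
        it stops dead on impact."""
        return t * t

    frames = 10
    for f in range(frames + 1):
        t = f / frames
        print(f"frame {f:2d}  eased: {smoothstep(t):.2f}  rock: {slow_in_fast_out(t):.2f}")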

A cannonball creates a fast/slow trajectory in different axes.

In the cannonball example, the sense of weight could be displayed by animating a follow-through on the cannon itself as it kicks back, much like a game animator will often exaggerate the kickback of a pistol to convey its relative power and damage as a weapon in gameplay, all while maintaining the instant response and feedback of firing.

Principle 7: Arcs

Most actions naturally follow arcs as elements of the moving object or character swing, such as arms and legs while walking. Body parts that deviate from a natural curve will be picked up by the eye and can look unnatural, so arcs are a great way of homing in on the polish and correctness of an action. Much of the clean-up work of making motion capture work within a game is removing egregious breaks from arcs that occur naturally in human motion but might look too noticeable and "wrong" when seen over and over in a video game. Contrary to this, though, animating every element of a character to follow clean arcs can look light or floaty when nothing catches the eye. As with most general rules, knowing when to break a smooth arc will add a further level of detail to animation and make it that little bit more realistic. Due to its weight compared to the rest of the body, the head will often snap after the rest of the body comes to a stop following a punch. Having the head break from the arc of the rest of the

body is one of many observable traits that working extensively with mocap can reveal, adding the extra layer of detail animation requires to look realistic.

Principle 8: Secondary Action

Secondary actions are used to complement and emphasize the primary action of the character, adding extra detail and visual appeal to the base action. While it can be difficult to incorporate more than one action in many gameplay animations due to their brevity (secondary actions must support and not muddy the look of the primary action), it is these little details that can make a good animation great. Examples of secondary actions range from facial expressions to accompany combat or damage animations to tired responses that play atop long stretches of running. Technologies detailed later in Chapter 5, "What You Need To Know," such as additive and partial animations allow actions to be combined on top of base actions to afford secondary motions that are longer than the individual animations required for player control.

Principle 9: Appeal

Appeal should be the goal of every animator when bringing a character to life, but is ineffable enough to be hard to describe. It is the difference between an animated face that can portray real emotion and one that looks terrifying and creepy. It is the sum of an animator's skill in selling the force of a combat action versus a movement that comes across as weak. It is the believability in a character's performance compared to one that appears robotic and unnatural. Appeal is the magic element that causes players to believe in the character they are interacting with regardless of where they lie on the stylized vs realistic spectrum, and is not to be confused with likeability or attractiveness, as even the player's enemies must look aesthetically pleasing and show appeal. This is owed as much to character design as it is to the animators' manipulation of it, where proportions and color blocking are the first steps in a multistage creation process that passes through animation and eventual rendering to make a character as appealing as possible. Simplicity in visual design and posing on the animator's part helps the readability of a move, and clear silhouettes distinguish different characters from one another.

Principle 10: Timing

Timing is the centerpiece of the "feel" of an animation and is generally invoked to convey the weight of a character or object. Intrinsically linked to speed, the time it takes for an object or individual limb to move or rotate a distance or angle will give the viewer an impression of how weighty or powerful that motion is.


The slow movement of the colossi in Shadow of the Colossus is juxtaposed with the nimble climbing of the hero. (Shadow of the Colossus (PS3) Copyright 2005–2011 Sony Interactive Entertainment Inc.) In 3D animation, this is best explained using basic mathematics:

speed = distance / time, and therefore time = distance / speed

This is why every curve editor shows the axes of both distance and time as the main input for animators to visualize the speed of the manipulations they are performing. If we move an object 10 m over 2 seconds, that is faster than doing so over 5 seconds. Similarly, posing a character with arm recoiled then outstretched gives a faster punch over 2 frames than it does over 5. Referring back to slow-ins and -outs, appropriate timing ensures a character or object obeys the laws of physics. The faster a move, the less weight it conveys, and vice versa, which refers back to the game animator's dilemma of offering gameplay response while still maintaining weight. Having a character immediately ramp up to a full-speed run when the player pushes on the controller stick will make the character appear weightless without the correct visual feedback of leaning into the move and pushing off with a foot. Moreover, the timing of reactions gives individual movements time to breathe, such as holding a pose after a sword swing before the next so the player sees it; or, during a cinematic moment, the delay before a character's next movement can illustrate a thought process at work as they pause.

Principle 11: Exaggeration

Real life never looks real enough. If you were to watch a real person perform an action, such as jumping from a height and landing on the floor, and copy it exactly into your animation, the character would likely look slow and aesthetically imperfect. Real movements do not follow perfect arcs or create appealing or powerful silhouettes. In animation, we are looking to create the hyper-real, a better presentation of what exists in real life.

Fantastic exaggeration of facial expressions in Cuphead. (Courtesy of Studio MDHR.)

In games especially, we more often than not must create actions that look great from all angles, not just from the fixed camera perspective of traditional linear media. This is why one of the best tools in an animator's kit is to exaggerate what already exists. When referencing actions, the animator must reinterpret the movements in a "hyper-real" way, with poses accentuated and held a little longer than in reality. Obeying the laws of physics completely, a bouncing ball traces a smooth parabola as it reaches the apex, then falls back to the ground under constant gravity. An animator, however, may choose to hold longer at the apex (creating anticipation in the process) before zooming back to the ground, much like a martial-arts character thundering down with a flying kick. Similarly, a real MMA fighter might uppercut for an opponent-crumpling KO, whereas the game animator would launch the opponent off their feet and into the air to "sell" the action more and make it readable for the player. "Selling" actions by "plusing" them is an excellent technique to ensure a player understands what is happening, especially when the camera might be far away for gameplay (situational awareness) purposes. Care must be taken to ensure the level of exaggeration is consistent throughout a project, and it will mostly fall on the animation lead or director to maintain this, as the level of exaggeration is a stylistic choice and inconsistency between actions (or across animators) will stand out and appear unappealing when the player plays through the entire game.


Principle 12: Solid Drawings

While at first seemingly less relevant in the age of 3D animation, one must remember that drawing is an essential method of conveying information between team members, and the use of thumbnails to explain a problem or find a solution is an almost daily occurrence when working with game design. All the best animators can draw to a degree that can easily support or convey a direction, and the skill is especially useful in the early stages when working on character designs to illustrate the pros and cons of particular visual elements. Nevertheless, the "solid" part was essential in the age of 2D animation to retain the volume of characters as they moved and rotated on the page, so a lot of focus was placed on an animator's skills in life drawing and the ability to visualize a character in 3D while translating them to the 2D page. While no longer done on a page, an understanding of volume and 3D is still essential for an animator to aid with posing and knowing the limits and workings of body mechanics.

The Devil May Cry series exemplifies strong posing for readability during the intense sword combat. (Courtesy of Capcom.)

Solid drawing can be reinterpreted as a solid understanding of body mechanics, which covers everything from center of mass and balance to the chains of reaction down a limb or spine as a foot hits the floor. Understanding how bodies move is a core competency of a good game animator, and knowing how they should look from any angle means cheating is out of the question.


Chapter 4

The Five Fundamentals of Game Animation

The 12 animation principles are a great foundation for any animator to understand, and failure to do so will result in missing some of the underlying fundamentals of animation—very visible in many a junior's work. Ultimately, however, they were written with the concept of linear entertainment like TV and film in mind, and the move to 3D kept all of these elements intact due to the purely aesthetic change in the medium. Three-dimensional animated cartoons and visual effects are still part of a linear medium, so they will translate only to certain elements of video game animation—often only if the game is cartoony in style. As such, it's time to propose an additional set of principles unique to game animation that don't replace but instead complement the originals. These are what I have come to know as the core tenets of our nonlinear entertainment medium, which, when taken into consideration, form the basis of video game characters that not only look good but feel good under player control—something the original 12 didn't have to consider. Many elements


are essential in order to create great game animation, and they are grouped under five fundamental areas:

1. Feel
2. Fluidity
3. Readability
4. Context
5. Elegance

Feel

The single biggest element that separates video game animation from traditional linear animation is interactivity. The very act of the player controlling and modifying avatars, making second-to-second choices, ensures that the animator must relinquish complete authorship of the experience. As such, any uninterrupted animation that plays start to finish is a period of time the player is essentially locked out of the decision-making process, rendered impotent while waiting for the animation to complete (or reach the desired result, such as landing a punch). The time taken between a player's input and the desired reaction can make the difference between creating the illusion that the player is embodying the avatar or becoming just a passive viewer on the sidelines. This is why cutscenes are the only element in video games that for years have consistently featured a "skip" option—because they most reflect traditional non-interactive media, which is antithetical to the medium.

Response

Game animation must always consider the time between player input and onscreen response as an intrinsic part of how the character or interaction will "feel" to the player. While generally the desire is to have the response be as quick as possible (fewer frames), that is dependent on the context of the action. For example, heavier/stronger actions are expected to be slower, and enemy attacks must be slow enough to be seen by the player to give enough time to respond. It will be the game animator's challenge, often working in concert with a designer and/or programmer, to offer the correct level of response to provide the best "feel," while also retaining a level of visual fidelity that satisfies all the intentions of the action and the character. It is important not to sacrifice the weight of the character or the force of an action for the desire to make everything as responsive as possible, so a careful balancing act and as many tricks as available must be employed. Ultimately, though, the best mantra is that "gameplay wins." The most fluid and beautiful animation will always be cut or scaled back if it interferes too much with gameplay, so it is important for the game animator to have a player's eye when creating response-critical animations, and, most importantly, play the game!

Inertia and Momentum

Inertia is a great way not only to provide a sense of feel for player characters, but also to make things fun. While some characters will be required to turn on a dime and immediately hit a run at full speed, driving a car around a track that could do the same would not only feel unrealistic but mean there would be no joy to be had in approaching a corner at the correct speed for the minimum lap time. The little moments when you are nudging an avatar because you understand its controls are where mastery of a game is to be found, and much of this is provided via inertia. Judging death-defying jumps in a platform game is most fun when the character must be controlled in an analog manner, whereby they take some time to reach full speed and continue slightly after the input is released. This is as much a design/programming challenge as it is animation, but the animator often controls the initial inertia boost and slowdown in stop/start animations.

Original sketches from Sonic the Hedgehog (circa 1990). The animation frames are already heavily displaying inertia. (Courtesy of SEGA of America.)

Momentum is often conveyed by how long it takes a character to change from their current direction and heading to a newly desired one. The general principle is that the faster a character is moving, the longer it takes to change direction, via larger turn-circles at higher speeds or longer plant-and-turn animations in the case of turning 180°. Larger turn-circles can be made to feel better by immediately showing the intent of the avatar, such as having the character lean into the turn and/or look with his or her head, but ultimately we are again balancing within a very small window of time lest we render our characters unresponsive. A classic example is the difference between the early Mario and Sonic the Hedgehog series. Both classic Mario's and Sonic's movement relies heavily on inertia, with similar long ramp-ups to full speed, but while Mario's animation immediately shows him cartoonishly running at full speed, legs spinning on the ground to gain traction, Sonic slowly transitions from a walk to a run to a sprint. While Mario subjectively feels better, this is by design, as Sonic's gameplay centers on high speeds and "flow," so stopping or slowing down punishes the player for not maintaining momentum.

Visual Feedback

A key component of the "feel" of any action the player and avatar perform is the visual representation of that action. A simple punch can be made to feel stronger with a variety of techniques related to animation, beginning with the follow-through after the action. A long, lingering held pose will do wonders for telling the player they just performed a powerful action. The damage animation on the attacked enemy is a key factor in informing the player just how much damage has been suffered, with exaggeration being a key component here. In addition, employing extra tricks such as camera-shake will help further sell the impact of landing the punch or gunshot, not to mention visual effects of blood or flashes to further register the impact in the player's mind. Many fighting games employ a technique named "hit-stop" that freezes the characters for a single frame whenever a hit is registered. This deliberately breaks the flow of clean arcs in the animations and reinforces the frame on which the impact took place. As many moves are performed quickly, so as to be responsive, they might get lost on the player, especially during hectic actions. Attacking actions can be reinforced by additional effects that draw the arc of the punch, kick, or sword swipe on top of the character, in a similar fashion to the smears and multiples of old. When a sword swipe takes only 2 frames to create its arc, the player benefits mostly from the arcing effect it leaves behind. Slower actions can be made to feel responsive simply by showing the player that at least part of their character is responding to their commands. A rider avatar on a horse can be seen to immediately turn the horse's head with the reins even if the horse itself takes some time to respond and traces a wide circle as it turns. This visual feedback will feel entirely more responsive than the slowly turning horse alone would, following the exact same wide turn.
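Hit-stop itself is one of the simplest of these tricks to implement: on a registered hit, the animation clock is held still for a frame or more while everything else keeps running. A minimal sketch (the class shape and frame counts are illustrative, not from any particular engine):

    class HitStop:
        """On impact, freeze animation playback for a few frames while
        the rest of the game continues to update."""

        def __init__(self):
            self.freeze_frames = 0

        def register_hit(self, frames: int = 3) -> None:
            # Heavier blows could pass a larger freeze, lighter ones smaller.
            self.freeze_frames = max(self.freeze_frames, frames)

        def animation_delta(self, dt: float) -> float:
            # While frozen, animation advances by zero time; gameplay,
            # camera shake, and effects continue with the real dt.
            if self.freeze_frames > 0:
                self.freeze_frames -= 1
                return 0.0
            return dt

    hit_stop = HitStop()
    hit_stop.register_hit(frames=3)
    for frame in range(5):
        print(frame, hit_stop.animation_delta(1 / 30))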

Animation's impact can be further enhanced by visual effects. (Courtesy of Capcom.)


Much of the delay in visual feedback comes not from the animation alone, but from the way different game engines handle inputs from the joypad in the player's hands. Games like the Call of Duty series place an onus on having their characters and weapons instantly respond to the player's inputs with minimal lag and high frame rates, whereas other game engines focus more on graphics and post-processing, and will have noticeably longer delays (measured in milliseconds) between a jump button-press and the character even beginning the jump animation, for example. This issue is further exacerbated by modern HDTVs that have lag built in and so often feature "Game Mode" settings to minimize the effect. All this said, it is still primarily an animator's goal to make characters as responsive as possible within reason.

Fluidity

Rather than long flowing animations, games are instead made of lots of shorter animations playing in sequence. As such, they are often stopping, starting, overlapping, and moving between them. It is a video game animator's charge to be involved in how these animations flow together so as to maintain the same fluidity put into the individual animations themselves, and there are a variety of techniques to achieve this, with the ultimate goal being to reduce any unsightly movement that can take a player out of the experience by highlighting where one animation starts and another ends.

Blending and Transitions

In classic 2D game sprites, an animation either played or it didn't. This binary approach carried into 3D animation until developers realized that, because characters are essentially animated by poses recorded as values, they could manipulate those values in a variety of ways. The first such improvement was the ability to blend between animations (essentially crossfading them during a transitory stage), taking every frame an increasing percentage of the next animation's values and a decreasing percentage of the current one's as one animation ended and another began. While more calculation-intensive, this opened up opportunities for increasing the fluidity between individual animations and removing unsightly pops between them. A basic example of this would be an idle and a run. Having the idle immediately cancel and the run immediately play on player input will cause the character to break into a run at full speed, but the character will pop as they start and stop with each repeated input. This action can be made more visually appealing by blending between the idle and run over several frames, causing the character to move more gradually between the different poses. Animators should have some degree of control over the length of blends between any two animations to make them as visually appealing as possible, though always with an eye on the gameplay response of the action.


Prince of Persia: Sands of Time was the first game to really focus on small transitions for fluidity. (Copyright 2003 Ubisoft Entertainment. Based on Prince of Persia®, created by Jordan Mechner. Prince of Persia is a trademark of Waterwheel Licensing LLC in the US and/or other countries used under license.)

The situation above can be improved further (albeit with more work) by creating brief bespoke animations from idle to run (starting) and back again (stopping), with blends between all of them. What if the player starts running in the opposite direction they are facing? An animator could create a transition for each direction that turns the character as they begin running, completely controlling the character's weight-shift as they lean into the desired direction and push off with their feet. What if the character isn't running but only walking? Again, the animator could create multiple directional transitions for that speed, too. As you can see, the number of animations can quickly spiral, so a balance must be found among budget, team size, and the desired level of fluidity.
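At its core, the crossfade described above is just a per-joint weighted average whose weight slides from 0 to 1 over the blend window. A minimal sketch, using plain floats per joint where a real engine would blend quaternions (the pose values and blend length are illustrative):

    def crossfade(pose_a, pose_b, t):
        """Linear crossfade between two poses, where each pose is a list
        of joint values and t runs from 0 (all pose_a) to 1 (all pose_b)."""
        return [a * (1.0 - t) + b * t for a, b in zip(pose_a, pose_b)]

    idle_pose = [0.0, 10.0, 0.0]
    run_pose = [25.0, 40.0, -15.0]

    blend_frames = 6  # the animator-tunable blend length
    for frame in range(blend_frames + 1):
        t = frame / blend_frames
        print(frame, [round(v, 1) for v in crossfade(idle_pose, run_pose, t)])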

Seamless Cycles

Even within a single animation, it is essential to maintain fluidity of motion, and this includes when a cycling animation stops and restarts. A large percentage of game animations cycle back on themselves, so it is important to again ensure the player cannot detect when this transition occurs. As such, care must be taken to maintain momentum through actions so the end of the animation perfectly matches the start. It is not simply enough to ensure the last frame of a cycle identically matches the first; the game animator must also preserve momentum on each body part to make the join invisible. This can be achieved by modifying the curves before and after the last frame to ensure they create clean arcs and continue in the same direction. For motion capture, where curves are

mostly unworkable, there are techniques, described later in Chapter 11, "Our Project: Motion Capture," that can automatically preserve momentum as a cycle restarts.

Care should also be taken to maintain momentum when creating an animation that transitions into a cycle, such as how the stopping animation should seamlessly match the idle. For maximum fluidity, the best approach in this case is to copy the approved idle animation and stopping transition into the same scene to manually match the curves leading into the idle, exporting only the stopping transition from that scene.

Settling

This kind of approach should generally be employed whenever a pose must be hit at the end of an animation, time willing. It is rather unsightly to have a large movement like an attack animation end abruptly in the combat idle pose, especially with all of the character's body parts arriving simultaneously. Offsetting individual elements such as the arms and root is key to a more visually pleasing settle. Notably, however, games often suffer from resuming the idle pose too quickly at the end of an animation in order to return control to the player and promote response, but this can be avoided by animating a long tail on the end of an animation and, importantly, allowing the player to exit out at a predetermined frame before the end if new input is provided. This ability to interrupt an animation before it finishes allows the animator to use the desired number of frames for a smooth and fluid settle into the following animation.
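The logic behind such exit (or "abort") frames is tiny: the long settle tail always exists in the data, but new input past a chosen frame cuts straight to the next action. A minimal sketch (the frame numbers and function shape are illustrative):

    def can_interrupt(current_frame: int, exit_frame: int, has_new_input: bool) -> bool:
        """The settle tail plays out fully when the player is idle, but
        new input can cut to the next action once the animation passes
        its predetermined exit frame."""
        return has_new_input and current_frame >= exit_frame

    ATTACK_LENGTH = 40  # full animation, including a long settle tail
    ATTACK_EXIT = 24    # control returns here if the player acts

    # No input: the whole 40-frame settle plays and eases into idle.
    print(can_interrupt(current_frame=30, exit_frame=ATTACK_EXIT, has_new_input=False))  # False
    # Input after the exit frame: cut straight into the next move.
    print(can_interrupt(current_frame=30, exit_frame=ATTACK_EXIT, has_new_input=True))   # True
    # Input during the committed portion: ignored until frame 24.
    print(can_interrupt(current_frame=10, exit_frame=ATTACK_EXIT, has_new_input=True))   # False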

Uncharted 4: A Thief's End makes heavy use of "abort frames" to exit gameplay animations and cinematics before completion for fluidity. (Courtesy of Sony Interactive Entertainment.)

Settling is generally achieved by first copying the desired end pose to the end of an animation but ensuring some elements like limbs (even divided into shoulder and forearms) arrive at their final position at different times, with earlier elements hitting, then overshooting, their goal, creating overlapping animation. Settling the character's root (perhaps the single most important element, as it moves everything not planted) is best achieved by having it arrive at the final pose with different axes at different times. Perhaps it achieves its desired height (Y-axis) first as it is still moving left to right (X-axis), causing the root to hit, then bounce past, the final height and back again. Offsetting in the order of character root, head, and limbs lessens the harshness of a character fully assuming the end pose on a single frame—though care must be taken to not overdo overlap such that it results in limbs appearing weak and floppy.

Readability

After interactivity, the next biggest differentiator between game and traditional animation, in 3D games at least, is that game animations will more often than not be viewed from all angles. This bears similarity to the traditional principle of staging, but animators cannot cheat or animate to the camera, nor can they control the composition of a scene, so actions must be created to be appealing from all angles. What this means is that, when working on an animation, it is not enough to simply get it right from a front or side view. Game animators must take care to always be rotating and approving their motion from all angles, much like a sculptor walking around a work.

Posing for Game Cameras

To aid the appeal and readability of any given action, it is best to avoid keeping a movement all in one axis. For example, a combo of three punches should not only move the whole character forward as they attack, but also slightly to the left and right. Similarly, the poses the character ends in after every punch should avoid body parts aligning with any axes, such as arms and legs that appear to bend only when viewed from the side. Each pose must be dynamic, with lines of action drawn through the character that do not align with any axes.

Lines of action are simplified lines that can be drawn through any single pose to clearly illustrate the overall motion for the viewer. Strong poses can be illustrated in this way with a single arcing or straight line, whereas weaker and badly thought-out poses will generally have less-discernible lines that meander and are not instantly readable to the viewer. Lines that contrast greatly between one pose and the next (contrasting actions) promote a more readable motion for the viewer than multiple similar or weak poses.


For the motions themselves, swiping actions always read better than stabbing motions, as they cover an arc that will be seen by the player regardless of the camera angle. Even without the aid of a trail effect, a swipe passes through multiple axes (and therefore camera angles), so even if players are viewing from a less-than-ideal angle, they should still have an idea of what happened, especially if the character dramatically changes the line of action during poses throughout the action. All said, always consider the game being made. If the camera is fixed to the side, such as in a one-on-one fighting game, then actions should be created to be most readable from that angle. Similarly, if you are creating a run animation for a game mostly viewed from the rear, then ensure the cycle looks best from that angle before polishing for others.

League of Legends pushes animation incredibly far due to the far overhead cameras and frenetic onscreen action. (Courtesy of Riot Games.)

Silhouettes

At the character design/concept stage, the animator should get involved in guiding how a character might look, not just to avoid issues such as hard, armor-like clothing at key versatile joints such as shoulders or waists, but also to help provide the best silhouettes when posed. A character with an appealing silhouette makes the job of creating appeal through animation far easier than one composed entirely of unimaginative blobs or shapeless tubes for limbs. It is advisable to request "proxy" versions of characters at early stages of development so they can be roughly animated and viewed in the context of the gameplay camera, which, due to wide fields of view (for spatial-awareness gameplay purposes), often warps the extremities of characters as they reach the screen's edge. Generally, the most appealing characters look chunkier and thicker than they might in real life, to survive being warped and stretched once viewed from the game camera.


Team Fortress 2 uses distinct character silhouettes for gameplay, making the animator’s job of bringing appeal much easier. (Used with permission from Valve Corp.)

Collision and Center of Mass/Balance

As with all animation, consideration must be given to the center of mass (COM, or center of balance) of a character at any given frame, especially as multiple animations transition between one another, so as to avoid unnatural movements when blending. The COM is generally found over the leg that is currently taking the full weight of the character's root when in motion, or between both feet if they are planted on the ground when static. Understanding this basic concept of balance will not only greatly aid posing but also avoid many instances of motions looking wrong to players without them knowing the exact issue. This is especially true when considering the character's collision (location) in the game world. This is the single point where a character will pivot when rotated (while moving) and, more importantly, where the character will be considered to exist in the game at any given time. The game animator will always animate the character's position in the world when animating away from the 3D scene origin, though not so if cycles are exported in place. Importantly, animations are always considered to be exported relative to this prescribed location, so characters should end in poses that match others (such as idles) relative to this position. This will be covered in full in the next chapter.

Context

Whereas in linear animation the context of any given action is defined by the scene in which it plays and what happens in the story before and after, the same is impossible in game animation. Oftentimes, the animator has no idea which action the player performed beforehand or the

setting in which the character is currently performing the action. More often than not, the animation is to be used repeatedly throughout the game in a variety of settings, and even on a variety of different characters.

Distinction vs Homogeneity

Due to the unknown setting of most game animations, the animator must look for opportunities to give character to the player and nonplayer characters whenever possible, and must also consider when to avoid it. If, for example, the animator knows that a particular run cycle is only ever to be performed by the character being animated, then they can imbue it with as much personality as matches the character description. It's even better if the animator can create a variety of run cycles for that character in different situations. Is the character strong and confident initially, but later suffers loss or failure and becomes despondent? Is the character chasing after someone or perhaps running away from a rolling boulder about to crush them? The level of distinction the animator should put into the animation depends on how much control they have over the context in which it will be seen.

The player character generally moves at a much higher fidelity and with more distinction than NPCs. (Copyright 2007–2017 Ubisoft Entertainment. All Rights Reserved. Assassin's Creed, Ubisoft, and the Ubisoft logo are trademarks of Ubisoft Entertainment in the US and/or other countries.)

If an animation is not designed for the player character but instead to be used on multiple nonplayer characters, then the level of distinction and notability should generally be dialed down so as to not stand out. Walks and runs must instead be created to look much more generic, unless the animation is shared by one group of NPCs only (all soldiers might run differently from all civilians). Almost always, the player character is unique among a game world's inhabitants, so this should be reflected in the animations.

Repetition

Similarly, within a cycling animation, if the action is expected to be repeated endlessly, such as an idle or run cycle, then care must be taken to avoid any individual step or arm swing standing out against the rest, lest it render the rhythm of repetition too apparent to the player—such as every fourth step having a noticeably larger bounce, for example.

Uncharted: Drake's Fortune utilized additive poses to avoid repetition when in cover. (Courtesy of Sony Interactive Entertainment.)

Standout personality can instead be added to one-off actions or within cycles via "cycle breakers," such as the character shifting their footing after standing still too long, performing a slight stumble to break up a tired run, or even modifying the underlying animation with additive actions—covered in more detail in the next chapter, "What You Need To Know."

Onscreen Placement

A key factor in setting the exaggeration of movement is the relative size of the character on the screen, as defined by the camera distance and field of view. While cameras have gotten closer and closer as the fidelity of characters has risen, players still need to see a lot of the environment onscreen for awareness purposes, so many games may show characters that are quite small. Far cameras require actions to be much larger than life so as to be read by the player. The same is true of enemy actions that are far off in the distance, such as damage animations to tell the player they landed a shot. Conversely, only really close cameras such as those employed in cutscenes afford subtleties like facial expressions—here, overly theatrical gestures will generally look out of place. It is important as a game animator to be aware of the camera for any particular action you are animating and to animate accordingly within the style of the project. The wide field of view of the gameplay camera will even distort the character enough to affect the look of your animation, so, as ever, the best way to evaluate the final look of your animation is in the game.


Elegance

Game animations rarely just play alone, instead requiring underlying systems within which they are triggered, allowing them to flow in and out of one another at the player's input—often blending seamlessly, overlapping one another, and combining multiple actions at once to ensure the player is unaware of the individual animations affording the avatar motion. If not designing them outright, it is the game animator's duty to work with others to bring these systems and characters to life, and the efficiency of any system can have a dramatic impact on the production and the team's ability to make changes toward the end of a project. Just as a well-animated character displays efficiency of movement, a good, clean, and efficient system to play them can work wonders for the end result.

Simplicity of Design

Industrial designer Dieter Rams, as the last of his ten principles of good design, stated that good design involves "as little design as possible," concentrating only on the essential aspects. A good game animation system should similarly involve no more design than is required, as bloated systems can quickly become unworkable as the project scales to the oft-required hundreds or thousands of animations. Every unique aspect of character-based gameplay will require a system to play back animations, from navigation around the world to combat to jumping and climbing to conversation and dialogue, and many more. Here, the game animator must aid in creating systems to play back all the varied animation required to bring each element of character control to life, and often the desire to create many animations will come into conflict with the realities of production such as project length and budget.

DOOM opted for full-body damage animations over body parts for visual control. (DOOM® Copyright 2016 id Software LLC, a ZeniMax Media company. All Rights Reserved.)

Thankfully, there are many tricks a team can employ to maximize their animation potential, such as reuse and sharing, layering and combining animations to create multiple combinations, or ingenious blending solutions that increase fluidity without having to account for absolutely every possible transition. While the simplest solution is to do nothing more than play animations in sequence, this will rarely produce the best and most fluid visuals, so the smartest approach is to manipulate animations at runtime in the game engine to get the most out of the animations the team has time to create. Again, we'll cover some of the potential systemic solutions in the next chapter.

Bang for the Buck

Just as we look to share animations, being smart about choices at the design stage should create a workable method of combining animations throughout production, in turn preventing unique solutions being required for every new system. For example, a well-thought-out system for opening doors could be expanded to interacting with and opening crates if made efficiently. When building any one system, always anticipate uses beyond the current requirements. A good approach to system design will produce the maximum quality of motion for the minimum amount of overhead (work). It must be stressed that every new animation involves not only the initial creation but later modification over multiple iterations, as well as debugging toward the end of the project. Every stage of development is multiplied by every asset created, so avoiding adding 20 new animations for each object type is not only cost-effective but allows more objects to be added to the game. (All that said, sometimes the solution to a system is just to brute-force create lots of animations, if your budget allows it.)

Sharing and Standardization

As mentioned earlier, it is important to know when to keep animations generic and when to make unique ones. If the game requires the player character to interact with many different objects, then it would be wise to standardize the objects' sizes so one animation accommodates all objects of a particular size. The same goes for world dimensions: if a character can vault over objects throughout the game, then it makes sense to standardize the height of vaultable objects in the environment so the same animation will work anywhere—not least so the player can better read the level layout and know where the character can and cannot vault. That said, if your gameplay is primarily about picking up objects or vaulting over things, then it may be worth creating more unique animations to really highlight that area and spend less effort elsewhere. This, again, feeds back into the idea of bang for the buck and knowing what is important to your particular game.


Gears of War featured high and low cover heights supported by different sets of animations. (Copyright Microsoft. All Rights Reserved. Used with permission from Microsoft Corporation.)

All these decisions must come into play when designing systems for your game, as very few teams can afford unique and bespoke animations for each and every situation. Nevertheless, beautiful game animation can come from even single-person teams that focus on one thing and do it very, very well. This is the crux of what good design is, and every aspect of game development benefits from clever and elegant design, regardless of game type.


Chapter 5

What You Need to Know

Basic Game Animation Concepts

Every game animator should know the basics of creating and getting a character into the game, opening up the first stages of video game animation creation and enabling the sheer joy of controlling a character you helped bring to life.

Common Types of Game Animation

To understand how game animations generally fit together, it is essential to understand how each animation is played on a character and for what purpose. There are essentially three primary types of animation required to make a character move fluidly, which are determined as much by how they start and end as by the action within.

Cycles

Perhaps the most commonly used, cycles are named as such because the start and end velocities must match to create a seamlessly looping motion that can be played indefinitely until another animation interrupts. The most

common examples are idles (where a character stands relatively still doing nothing) or walks/runs (where the player gives a movement input on the controller, causing the character to move through the game world at speed). A good video game cycle smoothly loops back to its original starting movement such that the viewer is unaware of the exact start and end of the loop. This requires more than simply matching the pose at both ends—it requires the movement to flow in such a manner as to prevent any noticeable hitch in momentum on any body part. To achieve this, animation curves must contain no harsh steps, and curve tangents must match at the start and end. Similarly, it is imperative that actions do not all arrive at a noticeable end pose on the same frame, however smoothly, as this too will detract from the seamlessness of a loop. It is essential to build in overlap in as many body parts or elements like clothing as possible to mask the exported end frame. Last of all, care must be taken to avoid any one part of a cycle that stands out too much over the rest, causing the player to recognize the repetition in a cycle, such as a single gesture being much larger than the rest. Ultimately, this falls under considerations around homogenization over distinction, the style of your project, and the particular character you're animating.
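That momentum-matching requirement can be expressed concretely: at the loop point, both the pose and the per-frame velocity of every curve must line up. A minimal sketch of such a seam check on a single curve (the data and structure are illustrative):

    def loop_seam_errors(curve):
        """Check a per-frame animation curve (one value per frame, where
        the last frame wraps back to the first) for loop seams.

        Returns (pose_error, velocity_error): the pose mismatch between
        the wrap frames, and the change in per-frame velocity across the
        seam. A seamless cycle needs both near zero, since matching poses
        alone will still hitch if momentum differs."""
        pose_error = abs(curve[-1] - curve[0])
        vel_into_seam = curve[-1] - curve[-2]   # speed approaching the end
        vel_out_of_seam = curve[1] - curve[0]   # speed leaving the start
        velocity_error = abs(vel_out_of_seam - vel_into_seam)
        return pose_error, velocity_error

    # Poses match (0.0 at both ends) but the arm decelerates into the end
    # and launches out of the start, so the loop will visibly hitch:
    arm_swing = [0.0, 0.4, 0.8, 1.0, 0.9, 0.5, 0.1, 0.0]
    print(loop_seam_errors(arm_swing))  # (0.0, 0.5)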

Linear Actions

A linear action is a one-off animation that has a distinct start and end, and usually goes to and from the "idle pose," with less requirement to play smoothly on repeat due to its one-off nature. Animations on joypad inputs such as jumps, punches, or sword-swings are the most commonly required and basic examples, essentially interrupting the character's current state for the duration of the animation and returning them to the prior state once completed. Emphasis should be placed on a clean transition to and from the cycle pose this animation is interrupting, often with a fast transition in, as the player expects an instant state-change on input. Importantly, the player has often committed to this move with no ability to interrupt or change the decision midway, so this must be considered when marking out the timing of any kind of linear move.

A sword-swipe attack animation has a definitive start and end.

Transitions

These types of animations are often superfluous but add to the fluidity of character movement by allowing artistic authorship of how the character moves from one action to another. Transitions are typically short and serve mostly to display proper weight-shifting between moves while keeping the feet planted. For example, the most basic games, on detecting a movement input from the player, will simply interrupt the idle cycle with the walk/run cycle by quickly blending across from one animation to another, whereas a better look can be achieved by creating a transition animation from the idle pose to the first frame of the walk/run cycle, allowing the animator to control how the character leans into the initial acceleration. Even higher fidelity can be achieved by creating a variety of directional transitions that cater to each direction the character can start moving in relative to their current facing, with the animator able to control every facet of the turning and momentum shift during the transition. As you can start to see, the number of transitions required to give fluidity to a character can quickly explode, so only the biggest-budget games dedicate lots of resources to this area.
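Runtime selection of those directional transitions usually amounts to comparing the input direction against the character's facing and picking the closest authored clip. A minimal sketch (the clip names and four-way split are hypothetical):

    # Hypothetical transition set: signed angle (degrees) between the
    # character's facing and the input direction, mapped to a start clip.
    START_TRANSITIONS = {
        0: "start_forward",
        90: "start_right_90",
        -90: "start_left_90",
        180: "start_turn_180",
    }

    def pick_start_transition(angle_to_input: float) -> str:
        """Pick the authored start transition whose coverage angle is
        closest to where the player wants to go."""
        def wrap_diff(a: float, b: float) -> float:
            # Smallest absolute difference between two angles.
            return abs((a - b + 180.0) % 360.0 - 180.0)

        return START_TRANSITIONS[
            min(START_TRANSITIONS, key=lambda a: wrap_diff(angle_to_input, a))
        ]

    print(pick_start_transition(30.0))    # start_forward
    print(pick_start_transition(-120.0))  # start_left_90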

Skeletons, Rigs, and Exporting to Game

Much like the traditional medium of stop-motion animation, which involves models with rigid skeletons inside them allowing for articulation, 3D character animation (and now a lot of 2D packages, too) primarily involves binding elements of a character to an underlying skeleton placed inside to allow articulation. In its most basic form, the rotational and positional values of these bones are exported, per frame, into the game engine and played back on your character in-game. As such, 3D character animation is at its core a set of rotation and translation values on joints that in turn move vertices around in a 3D space to give the illusion of a living character. After animating long enough, you may one day have the revelation that all motion in the world around you can be described in these numbers—which opens up tremendous opportunities. Once this curtain is peeled back and you view game characters this way, there is much that can be done with these numbers beyond simple manipulation in a 3D package.

• Crossfading the numbers when moving from one animation to the next will produce blending, creating smoother transitions between bespoke actions.
• Adding the numbers together will produce additive animation, allowing you to play the same breathing animation on top of multiple other animations. This should already be familiar to anyone animating via DCC layers, as the concept is identical.



Often only the export skeleton is read by the game engine.

• Subtract these numbers and you'll be able to remove poses from an animation, as required for additive animations where you wish to combine multiple complex animations (a minimal sketch of this add/subtract approach follows below).
• Export and/or play back only values for certain body parts and you'll get partial animations such as upper-body or arms only. This allows your characters to carry, shoot, and reload a pistol while using the same running animation as if they were empty-handed.
• Gradually vary the percentages of multiple animations playing down a joint chain to produce feather blends, allowing an animation to display farther down arms or up a spine, for example.

While all of these calculations will be handled by the game engine, understanding what is being done with the values on and between your animations is key to unlocking the true potential of real-time video game animation. This allows resourceful game animators to find ways to produce more animations at a high quality by combining and reusing animations in smart ways. Regardless of the keys used in your animation scene file, the animation is exported into the game with every frame at your project's framerate—most commonly 30 frames per second, though 60 is increasingly common. The game then linearly interpolates between these key values at runtime.
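The add/subtract bullets above reduce to a few lines of arithmetic. A minimal sketch of building and applying an additive layer, using flat lists of joint values where a real engine would operate per joint on quaternions (all pose data is illustrative):

    def make_additive(animation, base_pose):
        """Subtract a base pose from every frame of an animation,
        leaving only the difference: the 'additive' part."""
        return [[v - b for v, b in zip(frame, base_pose)] for frame in animation]

    def apply_additive(frame, additive_frame):
        """Add an additive frame on top of any other animation's frame."""
        return [v + d for v, d in zip(frame, additive_frame)]

    # A two-joint breathing animation authored over the standing idle:
    idle_pose = [0.0, 5.0]
    breathing = [[0.0, 5.0], [0.5, 5.8], [0.0, 5.0]]
    breath_additive = make_additive(breathing, idle_pose)

    # The same breathing delta now plays on top of a run frame:
    run_frame = [22.0, -3.0]
    print(apply_additive(run_frame, breath_additive[1]))  # [22.5, -2.2]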

Regardless of the keys used in your animation scene file, the animation is exported into the game with every frame sampled at your project's framerate, most commonly 30 frames per second, though more and more games are moving to 60. The game then linearly interpolates between these key values at runtime. While playing an animation back at full framerate offers the best results, animations are often compressed to save memory (albeit less radically nowadays as memory budgets grow) by saving only every second frame or fewer, depending on the compression algorithm's rules. Compressing too much can make the in-game animation visibly less fluid than the animation you created in the DCC, so take care not to over-compress.
Although 3D animation is generally created by animators manipulating rotations as X-, Y-, and Z-axis "Euler" angles (pronounced "Oiler," after the mathematician who devised them), rotations are actually read by the game engine as quaternions. A quaternion describes a rotation with four values, a form too unintuitive for an animator to work with directly, so keys are instead authored via the human-readable Euler angles. The dreaded downside of Eulers is "gimbal lock." This occurs when an animated object such as a skeleton bone is rotated on one axis until two of the three axes align, losing a degree of freedom and making the curves unworkable, highlighting the limitations of Eulers.

The animation rig sits on top of the skeleton.

An important aspect to understand when exporting skeleton values into the game is that animators typically don't manipulate the skeleton directly but instead use a rig built on top of it. This allows animators to take advantage of all the tricks and tools a DCC like Maya can offer, such as simple switches between local and global rotation, and to work with complex tools like Inverse Kinematics (IK). These aspects are covered in detail in Chapter 8, "Our Project: Technical Animation," but it is handy to know the limits of your particular game engine and how the skeleton animation values are read into your game. Armed with this knowledge, you can modify any animation scene or character you're working on to get the best results for its level of complexity.

How Spline Curves Work
Back in the days of traditional 2D animation, an animator would draw the important key poses first, much as we do now, to initially sell the action's movement. Once happy with the overall action, either the animator or (depending on the team size) one or more "in-betweeners" (an intermediary role under the animator) would draw the frames in-between, allowing the more senior animator to move on to the next shot more quickly.

The curve editor is how the animator modifies motion between keyframes.
With the advent of 3D animation, the computer now calculates all frames between key poses, with the animator able to modify how the 3D character moves between them by inserting more keys and, crucially, modifying the curves to promote the classic principle of slow-in/slow-out (often called ease-in/ease-out when referring to curve shapes).

As with everything in motion, speed is distance over time. Regardless of the 3D software used, curves are plotted on a graph with one axis in distance (often centimeters or meters) and the other in time (frames). An object keyed at 0 m on frame 0, then at 1 m on frame 30 (assuming 30-frames-per-second animation) will have traveled 1 m in those 30 frames. In this example, the curves are linear (straight), so we can say the object traveled at a constant speed of 1 m per second throughout.

A linear tangent.
However, animators rarely animate linearly, so a more likely scenario is that the object traveled at an overall speed of 1 m per second but was slower at the start and end and faster in the middle. This happens when the spline handles around the keys are flat rather than linear, causing the curve to display a slow-in and slow-out. The steeper the curve, the faster the object travels, and vice versa.
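To make the linear-versus-eased difference concrete, the sketch below samples the 1 m over 30 frames example above with both a linear curve and a smoothstep curve (one common formula that produces flat tangents at both keys); the function names are illustrative only.

def sample_linear(t):
    return t  # constant speed throughout

def sample_ease(t):
    # Smoothstep: zero-slope tangents at both keys give the classic
    # slow-in/slow-out shape.
    return t * t * (3.0 - 2.0 * t)

FRAMES, DISTANCE_M = 30, 1.0
for frame in (0, 5, 15, 25, 30):
    t = frame / FRAMES
    print(frame, round(sample_linear(t) * DISTANCE_M, 3),
          round(sample_ease(t) * DISTANCE_M, 3))
# Frame 5: linear has covered 0.167 m, eased only 0.074 m; by frame 25
# the eased curve has caught up (0.926 m), faster through the middle.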

An ease-in/out tangent.
Not to be confused with the curve editor's axes, each curve describes the distance along one of the three axes (X, Y, and Z) in 3D space.

Any object that moves in all three axes will have three such curves, each displayed in a different color. (In Maya, X is red, Y is green, and Z is blue.) Manipulating these three curves describes the object's motion in a way that gives it far more personality than simply moving linearly, with a wide variety of possibilities.

Curve variations: fast-in/slow-out and vice versa.
Importantly, curves can be bent further to an animator's whims by inserting additional keys between key poses. Traditionally, this was done by inserting whole new poses (called breakdowns) between key poses to guide the in-betweener's decisions about the fast or slow ins and outs, and the same can be done in 3D animation by inserting whole breakdown poses. Another advantage of 3D animation, though, is the ability to add keys on only certain parts of a character; for example, an animator can delay only an arm as the rest of the body winds up to throw a punch.

Whole-body keys with limb breakdowns in-between.
This has the benefit of making the curve editor less dense and more easily readable by the animator, as well as allowing easy manipulation and retiming of individual components of a character.

Generally, however, it is good practice to key the full character throughout the early blocking phase of an animation so an entire body pose/action can be retimed easily, only going in to key more detail such as individual limbs and the head once happy with the overall timing. Keying individual parts allows the animator to easily apply offsets and overlapping actions, but it can make retiming messy, as many of the keys in the curve editor will now be offset and less easily read.

Individual body-part curves offset from the rest of the body.
Bear in mind that animation should be judged primarily in the 3D view, not by looking at the curves. Some animators focus on creating clean curves, yet these rarely have as much personality as curves full of steep spikes and a mess of offset keys. As such, it is best to refer to the curve editor mostly when looking to debug or better understand why the in-betweens are misbehaving. A speedy way of retiming an animation is via the timeline editor, which allows fast manipulation of the timing (not the values) of keys without the need to work in the curve editor. In Maya, shift-middle-clicking allows you to select and then move a single key or group of keys. In addition, hotkeys can be set up for quick copy, cut, and paste functionality. Hotkeys are essential for fast workflows and can be tailored to an animator's preferences; set these up by following the DCC's help documentation.

Maya timeline with keys easily retimed.

Collision Movement
For the game to know where the character is expected to be on any given frame, the animator typically animates the character's collision. This is an exported bone, either at the top of the skeleton's hierarchy or entirely separate, that tells the game where the player is. It updates the character's collision volume (typically a cylinder or pill shape) that envelops the character, used to calculate impacts from weapon and bullet hits and, most importantly, to prevent the character from running through walls or falling through the ground. Collision shapes can become more complex depending on the nature of the game (shooting games, for example, require more precise shapes that hug the character's animating limbs so that bullet hits are fair and readable), but a cylinder provides the best feel when traversing the environment, its circular cross-section lessening the likelihood of getting stuck on edges.

The invisible collision cylinder moves with the character.

Depending on the particular engine, the exported animations may require the collision to be animated forward with the character to provide forward movement in the game. This means the player's forward trajectory (or any other direction, depending on the animation) is read from the animation itself. Other engines instead export animations in place ("on the spot") and drive the character's movement through the world from code alone. This approach makes it difficult for an animator to exactly match positions in the world and causes foot-sliding (skating), but it may better suit projects where direct code control of movement matters more. Ultimately, a hybrid provides the best results, using animated collision only when required, most commonly for precise contact with the game world.
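A hedged sketch of the first approach, often called root motion, might read the collision bone's per-frame delta and apply it to the character's world position; the data layout here is hypothetical.

def apply_root_motion(world_pos, collision_track, frame):
    """Advance the character's world position by the collision bone's
    per-frame delta authored in the animation (root motion)."""
    if frame == 0:
        return world_pos
    prev = collision_track[frame - 1]
    curr = collision_track[frame]
    delta = tuple(c - p for c, p in zip(curr, prev))
    return tuple(w + d for w, d in zip(world_pos, delta))

# Collision bone keyed linearly forward over a 4-frame stride (x, y, z).
track = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.3), (0.0, 0.0, 0.6), (0.0, 0.0, 0.9)]
pos = (10.0, 0.0, 5.0)
for f in range(len(track)):
    pos = apply_root_motion(pos, track, f)
print(pos)  # (10.0, 0.0, 5.9): moved 0.9 m forward over the stride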

Foot-sliding issues commonly occur when the movement of a character through the game world does not match the speed of the animation. This is most often the result of a character's design-driven movement speed being increased without the animation being updated, or being pushed to such an unrealistic speed that the animation can never match without being completely redone. Take heart, though: while sliding is painfully obvious when viewing enemy characters from a static position, it can be greatly cheated on player characters that move through the world as long as the camera is moving too, because the player cannot perceive the mismatch as the ground zooms past. To best avoid this issue, always try to prototype gameplay as much as possible before committing to final animations, and know that designers rarely slow characters down, instead often speeding them up for gameplay purposes.
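One common mitigation, assuming the engine exposes an animation playback rate, is to scale the cycle's speed toward the gameplay speed; the clamp values below are illustrative, not a standard.

def playback_rate(gameplay_speed_ms, authored_speed_ms):
    """Scale animation playback so foot speed matches ground speed.
    Clamped so an extreme design speed doesn't make the cycle comical."""
    rate = gameplay_speed_ms / authored_speed_ms
    return max(0.75, min(1.5, rate))

print(playback_rate(4.2, 3.5))  # 1.2: play the run cycle 20% faster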

While some games simply fix the collision at the base of the character by default, it is important for animators to have control over it during complex navigation actions such as climbing, vaulting, or using cover, where the character interacts closely with the environment. It is never advisable to turn off collision to solve an issue (it might turn back on with the character inside an object), so the position of the collision must be carefully managed when touching walls or cover objects, for example. Careful management of collision is also essential when transitioning between one animation and the next. It means nothing if your idle poses match when the collision is in a different position or orientation between animations, as this will cause the character to slide or rotate while blending from one animation to the next.

Collision can be animated independently of the character.
This is because the pose is always read by the game as relative to the collision, so care must be taken to ensure that an animation ending rotated 180° also has its collision rotated 180° and in the inverse position on the relevant axes. Ultimately, incorrect animation of collision is the cause of many animation pops, bugs, and imperfections, so a good understanding of how it behaves for your particular project and game engine is essential. At the very least, your animation team should have the ability to copy and paste poses relative to a character's new position in the world. The most common tools required for a good game animation pipeline are detailed in Chapter 8, "Our Project: Technical Animation." For third-person player animations, the camera will typically follow an offset from the collision to keep the character onscreen, pivoting around the body of the character. As such, the collision should generally move smoothly to avoid jerky camera movements, in contrast to the center of the character's body, which moves in a more accentuated way. Were a camera attached directly to a character's body, the body part to which it is attached would always appear static onscreen, as it moves in lockstep with the camera. As an example, for a run cycle the collision must move in a linear forward motion under the feet rather than matching the position of the character's center of mass or root on every frame.

To achieve this, ensure that the collision is placed relative to the character only at the start and end of a cycle, with linear movement in-between. If the run cycle involves multiple pairs of steps, match the collision under the character at the end of each pair rather than just at the extremes, so the character does not drift too far outside the collision if the run speed is not exactly constant.

The collision position keyed per step pair.

Forward vs Inverse Kinematics
Three-dimensional animation (specifically, the posing that when sequenced creates animation) is primarily done by moving and rotating bones toward a desired result. Posing bones higher in a joint chain affects those lower down, such that rotating a shoulder naturally moves the elbow and wrist. This describes forward kinematics (FK) and is desirable when animating an arm swinging during a walk, or legs when swimming or jumping, but not when the feet touch the ground. For the most part, feet are planted firmly on the ground and we expect them to stay in place unless the animator moves them, so a different approach is required. This is where IK comes into play. When we have a desired end position/rotation for a bone like the heel, calculations are performed to rotate the hips and knees to maintain this result. The additional calculations required compared to FK make this effect more expensive in-game; while it rarely has an impact on the pre-exported animation in the DCC, it can prove expensive inside the game engine, especially on multiple characters. (Animations are almost always exported as fully FK, regardless of whether IK is used in the DCC.)

Forward vs Inverse Kinematics.
As such, real-time IK is generally best used only when absolutely required, or only on important characters like the player. It is primarily used in two instances:
1. As a real-time modifier to conform feet to uneven terrain in the game, adding to the realism of a character planted in the world, as the feet are matched to the ground despite animations being authored on a flat plane.
2. To maintain visual fidelity when blending between animations that would cause undesirable results with FK, such as when a driver's hands need to remain in place on a steering wheel or similar attachment.
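For the curious, the core of a simple two-bone IK solve is just the law of cosines. The following sketch solves a leg in 2D (the plane of the hip, knee, and heel); a production solver adds pole vectors and works in 3D, so treat this as illustrative only.

import math

def two_bone_ik(target_x, target_y, thigh_len, shin_len):
    """Solve hip and knee angles (radians) so the heel reaches the
    target in the 2D plane of the leg, via the law of cosines."""
    dist = math.hypot(target_x, target_y)
    # Clamp so the leg straightens rather than erroring when the
    # target is out of reach.
    dist = min(dist, thigh_len + shin_len - 1e-6)
    # Interior knee angle from the triangle hip-knee-heel.
    cos_knee = (thigh_len**2 + shin_len**2 - dist**2) / (2 * thigh_len * shin_len)
    knee_bend = math.pi - math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle: direction to target plus the triangle's correction.
    cos_corr = (thigh_len**2 + dist**2 - shin_len**2) / (2 * thigh_len * dist)
    hip = math.atan2(target_y, target_x) + math.acos(max(-1.0, min(1.0, cos_corr)))
    return hip, knee_bend

hip, knee = two_bone_ik(0.2, -0.8, thigh_len=0.45, shin_len=0.45)
print(math.degrees(hip), math.degrees(knee))  # one valid hip/knee solution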

Intermediate Game Animation Concepts
Once you understand the basics of how video game animation is created and played back in the engine, these intermediate areas will expand your utility as a game animator, introducing established tools and workflow opportunities that increase both quality and output.

State Machines
Also referred to as finite-state machines or state graphs, and similar to blend trees, state machines are visual editors that describe and control the various states (actions) an animated character can be in and how it transitions between them, as well as the blending that can occur within a single state, such as when aiming a weapon. Inputs such as joystick angle and button presses can be mapped directly to trigger or blend between animations, allowing animators not only to animate but also to implement and adjust blending once they've exported their work into the game engine.

A good game animator can make beautiful flowing animations, but a great game animator can find clever and novel ways for them to flow together inside the game engine when controlled by the player. Traditionally, when first designing or evaluating the requirements for a given character, the game animator would draw out on paper the various states the proposed character could be in (such as idle, walk, run, etc.) as nodes in a flowchart, with lines connecting the states that could be moved between. This helped not only to list all the animations needed to get the character fully up and running, but also to highlight where transition animations might be required to smooth over commonly used transitions between states. The animator or designer would then work with the programmer to implement the character, using the state chart as a reference.

Original Jak & Daxter animation state graph on paper. (Courtesy of Sony Interactive Entertainment.)
As game character animation grew more complex, these state charts became incredibly complex spider webs containing hundreds of animations that were difficult to maintain. Computer software to draw them is still employed, and from that came the natural next step: a visual editor that could not only illustrate but also control the various states.

State machine in a game engine editor.
Nowadays, finite-state machines are included with many game engines, so they should be learned and employed by game animators, allowing them to get characters up and running with little to no programmer support.
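A state machine can be sketched in a few lines of code; the states, transition clips, and class below are hypothetical stand-ins for what an engine's visual editor builds graphically.

# A minimal locomotion state machine: states, allowed transitions, and
# the transition clips that smooth commonly used edges.
TRANSITIONS = {
    ("idle", "run"): "idle_to_run",
    ("run", "idle"): "run_to_stop",
    ("run", "jump"): None,   # direct blend, no authored transition
    ("jump", "idle"): "land",
}

class LocomotionFSM:
    def __init__(self):
        self.state = "idle"

    def request(self, new_state):
        key = (self.state, new_state)
        if key not in TRANSITIONS:
            return  # illegal edge: stay put rather than pop
        clip = TRANSITIONS[key]
        if clip:
            print(f"play transition '{clip}'")
        self.state = new_state

fsm = LocomotionFSM()
fsm.request("run")   # play transition 'idle_to_run'
fsm.request("jump")  # direct blend
fsm.request("idle")  # play transition 'land'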

Parametric Blending
While the most basic implementation of animation on a character is to play one animation at a time and blend across to a new animation when required, many games these days play several animations at once. Blending between multiple animations gives the character a constant illusion of life, pulling away from singularly authored actions and instead running complex systems of animations on top of one another for fluidity. The primary reason for blending between multiple animations is to perform varying actions depending on parameters in the game. For example, the incline/decline angle detected on the ground beneath the character's feet can be used to blend between flat, ascending, and descending walk/run cycles, making the character appear to use more or less effort on slopes. This helps give the impression the character is "aware" of the surrounding environment and reacts accordingly, further grounding the character in the virtual world. Other common parameters that influence blend values between multiple animations include speed, rotation angle, and jump height.

Parametric blending between synced walk and run animations depending on player speed.
The faster the run, the tighter the turn, or the higher and longer the jump, the more the character plays a percentage "weight" of each of the animations created for that action.
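Computing those weights from a single parameter is straightforward; this sketch maps player speed to walk/run weights, with the speed thresholds invented for illustration.

def blend_weights(speed, walk_speed=1.5, run_speed=4.0):
    """Weight the synced walk and run cycles from current speed (m/s)."""
    if speed <= walk_speed:
        return {"walk": 1.0, "run": 0.0}
    if speed >= run_speed:
        return {"walk": 0.0, "run": 1.0}
    t = (speed - walk_speed) / (run_speed - walk_speed)
    return {"walk": 1.0 - t, "run": t}

print(blend_weights(2.75))  # {'walk': 0.5, 'run': 0.5}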

Partial Animations
One method of playing multiple actions on a character at once is to break the character into body-part groups, such as left/right arm, upper/lower body, and head and neck, and play a unique animation fully on each. This is most commonly used to let a character move through the world with walk/run animations on the lower body while simultaneously playing combat animations like sword-swinging or shooting on the upper body. Another common use for the technique is to cleverly maximize animation reuse: one set of walk and run strafing animations can serve a variety of different weapons if unique upper-body animations are played on top for each weapon type. The biggest drawback of this approach is the noticeable visual disconnect between the upper and lower body, but this harsh effect can be lessened somewhat by applying a "feather blend" up the chain of bones (e.g., the arm or spine) between the two disconnected animations. This means an upper-body animation of holding a gun may receive more weight farther up the spine, with the lower spine joints playing a larger amount of the movement driven by the walk or run.
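A feather blend reduces to a per-joint weight ramp along the chain, as in this sketch; the joint names and linear ramp are assumptions, and a real setup might hand-tune each joint's weight.

# Feather blend: the upper-body animation's weight ramps up the spine
# chain so the two partial animations meet softly instead of at a seam.
SPINE_CHAIN = ["pelvis", "spine_01", "spine_02", "spine_03", "chest"]

def feather_weights(chain, start=0.0, end=1.0):
    """Per-joint upper-body weights, ramping from `start` at the pelvis
    to `end` at the chest."""
    n = len(chain) - 1
    return {joint: start + (end - start) * (i / n)
            for i, joint in enumerate(chain)}

print(feather_weights(SPINE_CHAIN))
# pelvis 0.0, spine_01 0.25, spine_02 0.5, spine_03 0.75, chest 1.0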

Additive Layers
A step up from blending between full-body and partial body-part animations is the process of blending multiple animations on top of one another in any number of layers. Much like working in layers in the DCC, this approach requires different actions to be authored individually, then recombined at runtime on your in-game character.

Often this approach requires some amount of guesswork, as the animator can only see the final result in-game, but as every animation should ultimately be evaluated and approved in the game, this fits the workflow. Importantly, for additive animations to work correctly, the "additive" animation almost always requires a pose to be subtracted from it before it can be overlaid. This is because we only want the difference between the additive animation and a selected (usually idle) pose to be applied to the underlying animation; two full animations simply overlaid would cause the character to explode, as all the position and rotation values would be doubled.

Original Animation + Offset Pose = Modified Animation.
Additive animations work by adding the position and rotation values of two animations together to create the desired result, allowing the underlying action to remain visible beneath the added animation. This creates a much more fluid result than partial layers, which essentially overwrite one another, but it comes with a degree of risk. If an underlying walk animation, for example, swings the forearm a lot, and we add an attack animation on top that also bends the forearm at some point, the two combined animations may bend the forearm past the limits of what is physically possible and break the arm.
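In code, the subtract-then-add step looks like the sketch below, shown on simple scalar channels for clarity; real engines apply the same idea per joint, using quaternion math for rotations. All channel names here are hypothetical.

def additive_apply(base, additive, reference, weight=1.0):
    """Per-channel additive: subtract the reference (usually idle) pose
    from the additive animation, then add the difference to the base."""
    return {ch: base[ch] + (additive[ch] - reference[ch]) * weight
            for ch in base}

base_walk    = {"spine_bend": 10.0, "head_nod": 0.0}
breathe_anim = {"spine_bend": 3.0, "head_nod": 1.5}   # authored over idle
idle_ref     = {"spine_bend": 0.0, "head_nod": 0.0}

print(additive_apply(base_walk, breathe_anim, idle_ref))
# {'spine_bend': 13.0, 'head_nod': 1.5}: breathing rides on the walk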

Physics, Dynamics, and Ragdoll
One benefit of animating characters in real time is that we can use the game engine's processing power to further improve animations via physics-driven motion, with next to no additional work on the animator's part. Setting up elements such as clothing or long, flowing hair to move via physics at runtime can do wonders for animations, especially in making them flow fluidly into one another, as the physics motion straddles all blends and transitions, no matter how harsh.

The more physics motion the better, with body parts that affect the character's silhouette, like a long cloak, working best to mask the jumps between animations from the player's eye. Some of the earliest implementations of full physics on game characters focused primarily on "ragdoll," used for death reactions: switching characters from animation to fully physicalized motion let them flop down dead wherever they expired. In recent years, however, physics has become a much more integral part of motion. Like additive layers, games now combine physics on top of animated motion to show characters flinching from attacks or being buffeted around in moving vehicles, and deaths are now a combination of premade animations with physics layered on top to react to the environment, allowing a performance to play out in the character's death throes.

Red Dead Redemption put emphasis on physics-driven deaths. (Courtesy of Rockstar Games.)
Physical simulations are also used to generate the destruction of objects and the environment that would otherwise be difficult and time-consuming to animate convincingly by hand. They can run at runtime (for objects affected unpredictably by the player's actions) or as prebaked animations that exist like any other, with the simulation run in the DCC and exported as a regular animation. Destruction is a great way to make the game world feel much more reactive to the player and can add drama and movement to an action scene. Physical simulations are often applied not to the full character but to movable clothing elements like cloaks and jackets, as well as softer areas of fat and muscle. This creates additional follow-through and overlapping motion on the fly whenever a character transitions between animations or comes to rest, blurring the start and end of each animation. After initial setup on the rig, plus maintenance to ensure things do not break, this affords a higher visual quality with next to no additional work.

The two approaches involve calculating dynamic elements either in the DCC or at runtime in the game, each with its own pros and cons:
1. At runtime, physics is calculated by the game, creating a processing load that might otherwise be spent elsewhere. The benefit of this approach is that settings can be tweaked across the board instantly, and the physics causes motion to flow naturally between any actions performed by the player.
2. In the DCC, all simulations are run before the animation is exported into the game, meaning all secondary movement is prebaked into the animation file, adding to each animation's memory footprint. Any tweaks to settings must be made in the DCC, and all animation batches need to be re-exported, which takes time. Without the benefit of runtime overlapping movement to aid fluidity between actions, the main reason for this approach is to save on runtime calculations, which is especially important when applying physics to many characters, such as a crowd, or when targeting less powerful systems like mobile.
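The runtime flavor of such secondary motion is often approximated with a damped spring per dynamic bone; this sketch is a deliberately simplified, hypothetical single-axis version.

def step_spring(pos, vel, target, dt, stiffness=60.0, damping=8.0):
    """One semi-implicit Euler step of a damped spring: a cheap runtime
    stand-in for cloth/hair follow-through on a single bone."""
    accel = stiffness * (target - pos) - damping * vel
    vel += accel * dt
    pos += vel * dt
    return pos, vel

# A cloak bone chasing its animated attach point after a hard blend.
pos, vel, target, dt = 0.0, 0.0, 1.0, 1.0 / 60.0
for _ in range(30):
    pos, vel = step_spring(pos, vel, target, dt)
print(round(pos, 3))  # chases the target with lag, settling toward 1.0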

Batman’s cloak physics greatly aids transitions in the Arkham series. (Batman Arkham Knight image used courtesy of Warner Bros. Entertainment Inc. and DC Comics.)

Advanced Game Animation Concepts
Advanced concepts are mostly technology-based opportunities usually reserved for animation teams with the larger budgets needed to support the required technology and tools, but they are still useful to understand regardless, because game development is always about finding workarounds and cheaper alternatives.

Procedural Motion and Systems
In video game parlance, "procedural" animation means anything made by computer algorithms instead of an animator. This covers motion generated by feeding an algorithm data, such as behavioral rules for how a character might move, allowing the algorithm to make decisions on the fly and generate the required motion.

The Last Guardian employed procedural movement on Trico. (The Last Guardian. Copyright 2016 Sony Interactive Entertainment Inc.)
This approach requires minimal animation, so it can be a great time-saver in terms of asset creation, though it greatly adds to the complexity of a character's setup and has yet to produce convincing results on realistic humanoids. Doing these calculations at runtime allows characters to respond naturally to the player's actions and the environment in ways that playing back preset animations alone can never achieve. Regardless of the level of procedural influence used on a character, an animator's eye is essential to guide the visual result toward something acceptable. Procedural systems are also beneficial when generating quantities of assets that might otherwise be beyond the scope of the project budget or team size. Using algorithms, essentially sets of rules, to generate content can open up new game experiences that rely on quantity, though almost always at a cost in quality compared to motion hand-authored by an animator. Examples include systems that sequence animations together for natural gestures when speaking, or procedural lip-sync based on the audio and text of the dialogue.

Full-Body IK
While it can be used in the DCC when animating, the best use of full-body IK is to manipulate animations at runtime, bending pre-created animations to match contact points in the game world. Commonly used for climbing systems, full-body IK will not only move the hands and feet to correctly attach to the desired surface, such as a wall or ladder, but will also adjust the position of the body via the root to avoid hyperextension of arms and legs when reaching for these targets. This solution is also used to ensure that characters of differing sizes from those the animations were authored with can interact with one another effectively, such as during grappling or throwing moves in a fight, or even to adjust attacks between the player and enemies on variable terrain such as slopes, or while standing on objects of varying heights.

The UFC series utilizes IK to emphasize character connections. (Courtesy of Electronic Arts.)
Full-body IK can also be used to reach for and interact with objects, such as accurately grabbing a door handle when the player character's position is variable. Because full-body IK is a post-process requiring a set of tweakable variables on the character setup or per action, it is time-consuming to set up and so is usually used sparingly.

Look-Ats
Look-ats are an invaluable tool in the game animator's toolkit for adding the illusion of "awareness" to a character in the game world. Look-ats utilize IK to rotate a bone or chain of bones, most commonly the spine, neck, and/or head, to aim toward a target as if the character were looking at it. This rotation can either completely override the underlying animation or be overlaid on top of it, with the latter preferable so the animator's work underneath is not lost when the look-at activates. Most look-at systems in games also require targets to be placed in the world for the character to look at, potentially with priorities so the character appears to switch between them in a logical fashion. The equivalent of a look-at can be created inside the DCC with an aim constraint, which carries parameters such as aim axis, up-vector, strength, and rotation limits, as well as blend-in and blend-out times, the same parameters a game system will require. For best results, when a look-at is activated, the head should lead and the body should follow down the chain, as happens in real life when someone's attention is caught.
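At its core, a look-at computes an aim offset, then clamps and weights it before applying it over the pose, roughly as in this sketch (yaw only; the axis conventions and limits are assumptions).

import math

def look_at_yaw(head_pos, target_pos, base_yaw_deg,
                max_turn_deg=60.0, weight=1.0):
    """Yaw offset aiming the head at a world target, clamped to a
    comfortable limit and overlaid on the animated pose by `weight`."""
    dx = target_pos[0] - head_pos[0]
    dz = target_pos[2] - head_pos[2]
    desired = math.degrees(math.atan2(dx, dz))
    # Wrap the offset into [-180, 180) so we always turn the short way.
    offset = (desired - base_yaw_deg + 180.0) % 360.0 - 180.0
    offset = max(-max_turn_deg, min(max_turn_deg, offset))
    return base_yaw_deg + offset * weight

# Target 45 degrees to the character's right, blended in at half strength.
print(look_at_yaw((0, 1.7, 0), (5.0, 1.7, 5.0), base_yaw_deg=0.0, weight=0.5))
# 22.5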

Blend Shapes
Blend shapes are primarily used for facial animation. While most facial animation is exported to the game as regular bone animation (or, more accurately, as pose values driving those bones), the best visual quality in video game facial animation comes from blend shapes.

Uncharted 4: A Thief's End drives bone-based faces with blend shapes. (Courtesy of Sony Interactive Entertainment.)
Blend shapes involve uniquely modeling the face mesh for each pose, sometimes in the hundreds, rather than simply repositioning facial bones. Uniquely modeling the face allows individual deformations that best recreate the shapes of highly detailed areas such as the lips, fixing creases that appear unnaturally when only the underlying bones are moved, or maintaining volume in the mouth when the lips are pursed. Blend shapes also enable mesh-deforming details such as tightening neck muscles and individually modeled wrinkles around the forehead and eyes, rather than relying on shortcuts such as animated textures/shaders. Blend shapes are still expensive both computationally and in production time, so they are an unlikely solution for games with many characters, but a middle ground can be found by using blend shapes alongside bone poses to correct facial issues that arise with a bone-only solution.
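The underlying math is simple per-vertex addition of weighted deltas from the base mesh, as this sketch shows with a hypothetical two-vertex "face" and a single "smile" shape.

def apply_blend_shapes(base_verts, shapes, weights):
    """final = base + sum(weight_i * (shape_i - base)), per vertex/axis."""
    result = [list(v) for v in base_verts]
    for name, weight in weights.items():
        if weight == 0.0:
            continue
        for i, shape_v in enumerate(shapes[name]):
            for axis in range(3):
                result[i][axis] += weight * (shape_v[axis] - base_verts[i][axis])
    return result

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
shapes = {"smile": [(0.0, 0.2, 0.0), (1.0, 0.3, 0.0)]}
print(apply_blend_shapes(base, shapes, {"smile": 0.5}))
# [[0.0, 0.1, 0.0], [1.0, 0.15, 0.0]]: halfway toward the smile shape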

Muscle Simulation
While standard animation involves deforming a mesh around a skeleton's pivot points, with weighting adjusting how much each joint affects every individual vertex on the model, muscle simulation creates a mass of "muscle" underneath the mesh that performs additional corrective deformations on top. True muscle simulation is rare in video games due to the additional computation required, but there are solutions that approximate the same results.

Muscle deformation in Fight Night. (Courtesy of Electronic Arts.)
These include corrective blend-shape deformation of key body parts; additional helper bones that float atop the export skeleton to maintain mass at pinch points like the shoulders, elbows, wrists, pelvis, and knees; and physics bones that can be driven to approximate muscle contraction when moved by the bones around them.

Animated Textures/Shaders
While primarily a tool for visual effects artists, it is useful to understand any game engine's ability to animate textures on characters and other objects. One classic example is simple UV animation, which moves a texture across a surface (where U and V represent horizontal and vertical coordinates across a texture). Others include playing a flipbook (a number of frames of pre-created animation rendered out, then played back in 3D space or on a character). Normal maps, the technique of providing extra detail on flat surfaces via textures containing depth/height and directional "normal" information to give the illusion of a 3D surface, are now commonly used to create the impression of creases on faces and clothing. Rather than animating a high-detail rig that manipulates wrinkles and creases in the mesh itself, normal maps are simply faded in and out when required. Most often (in the case of facial wrinkles), blend values are driven automatically by the poses an animator manipulates for facial animation; raising the eyebrows, for example, fades in the forehead wrinkles without the animator having to animate them separately.
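UV animation is about as simple as shader tricks get: advance an offset each frame and wrap it to keep it in range, as below (the scroll speed is arbitrary).

def scroll_uv(uv, speed_uv, dt):
    """Advance a texture's UV offset; wrap to keep coordinates in 0-1."""
    return tuple((c + s * dt) % 1.0 for c, s in zip(uv, speed_uv))

uv = (0.0, 0.0)
for _ in range(90):                 # 1.5 s at 60 fps
    uv = scroll_uv(uv, (0.4, 0.0), 1.0 / 60.0)
print(tuple(round(c, 2) for c in uv))  # (0.6, 0.0)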

Artificial Intelligence
While a game animator will not be expected to create or implement AI behaviors, some knowledge of them helps complement the programmer(s) tasked with doing so.

Working together will result not only in more believable movement through the world but also in NPCs whose decision-making is visible when they make choices. It should be noted that artificial intelligence is one of the hottest and most rapidly improving fields in all of programming, so the examples below are tried-and-tested generalities rather than the latest cutting-edge methods, which are evolving daily.

Decision-Making
This is the aspect of AI that "decides" when to carry out an action, what to do, where to move, and how to get there. These decisions are usually responses to the player's own decisions, such as taking cover when fired upon or, more commonly, moving to a better vantage point for engaging in combat. Typically, AI uses a combination of:
1. Points-based systems that assign values or weights to possible outcomes to emulate priorities.
2. Rules that govern what situation triggers an AI event, such as "when NPC health is below 50%, retreat."
Programmers typically ask animation to create highly visible animations that illustrate decision changes, such as starting, stopping, or being surprised.
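Combined, the two techniques might look like this sketch, where a hard rule overrides a points-based score; the stats and weights are invented for illustration.

# Points-based decision-making: score each option, pick the highest,
# with a rule layered on top (retreat when health is low).
def choose_action(npc):
    if npc["health"] < 0.5:            # hard rule overrides scoring
        return "retreat"
    scores = {
        "take_cover": 2.0 if npc["under_fire"] else 0.0,
        "advance": 1.5 if npc["has_line_of_sight"] else 0.5,
        "hold_position": 1.0,
    }
    return max(scores, key=scores.get)

npc = {"health": 0.8, "under_fire": True, "has_line_of_sight": False}
print(choose_action(npc))  # take_cover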

Pathfinding
A common result of a change in decision-making is that an NPC will make its way toward a new location, such as a cover object or the player. For pathfinding to work on all but the most basic level layouts, NPCs must have a degree of awareness of their environment so as to find the optimal path while navigating obstacles. Paths are determined by priorities given to the AI, such as "engage or avoid the player," "fastest route," "maintain momentum," or "avoid turning 180 degrees."

A level layout with marked-up AI path nodes and the navigation path (navmesh) in green.

Similar to the decision-making points/weights system, preset path points within a level (either hand-placed or procedurally generated) will often have values assigned to them, with the NPC looking for the lowest total value (the most efficient path) to reach a goal. Path priority calculations can go beyond simply running to the target: bespoke traversal actions, such as points designated as appropriate for vaulting over a wall, are also assigned values and entered into the calculation. An important rule to remember is that good video game AI is not about making characters display the most realistic responses, but the most readable and enjoyable ones that fit within the gameplay systems. Clever decision-making is all for naught if invisible to the player, so it must be supported by readable animations that clearly show what an NPC is "thinking." The job of AI is not to be smart, but to be entertaining.
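As an illustration of that lowest-total-value search over weighted nodes, here is a compact Dijkstra sketch; the level graph and costs are hypothetical, and real games typically search a navmesh, often with A* instead.

import heapq

def cheapest_path(graph, start, goal):
    """Dijkstra over weighted path nodes: the NPC takes the route with
    the lowest total cost, where edge costs can encode traversal
    penalties (e.g., a vault point costs more than open ground)."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier,
                               (cost + edge_cost, neighbor, path + [neighbor]))
    return None

graph = {
    "spawn": [("corridor", 2.0), ("vault_wall", 3.5)],
    "corridor": [("cover", 4.0)],
    "vault_wall": [("cover", 1.0)],   # shortcut, but the vault costs extra
}
print(cheapest_path(graph, "spawn", "cover"))
# (4.5, ['spawn', 'vault_wall', 'cover']): the vault still wins overall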

(Courtesy of Activision.)

Interview: Mark Grigsby Animation Director—Call of Duty: Modern Warfare

Call of Duty has consistently been a powerhouse in video games, due in no small part to the fantastic “feel” of the weapons. Have you learned a specific recipe for making guns feel good from an animation perspective? Well, first and foremost, I love animation! As a child, I would study Disney films on my VHS, frame by frame, to understand how they were able to make things look so real, yet have their unique style with animation. I’ve always been a huge fan of action in films, cartoons, and artwork. That admiration for action has developed a certain style for my timing and energy when it comes to how I approach animation. If you love every bit of the art of observation, studying not only the principles of animation but the true cause and effect of what happens with motion, coupled with a strong drive to convey that through animation, you have the most important ingredient in the recipe. The other ingredients that are specific to delivering that special sauce are research on weapon handling, hands-on training, and studio-wide collaboration with the goal of ultimate player immersion!

Visibility of opponents is a crucial balancing act in a multiplayer FPS. What are some key third-person animation considerations you can share for such a tightly competitive multiplayer?
That is the name of the game in a competitive FPS; you need to see the opponent you are facing. There are many factors in visibility: models, materials, lighting, and of course animations. When we approached the third-person animation in Call of Duty: Modern Warfare, our goal was to make a substantial increase in fidelity through our locomotion assets and our state machine. As we moved through development and our fidelity increased, we had issues of players not being able to see other players and being attacked from unseen places. This is an issue nicknamed "head-glitching." It happens in FPS games when a player's head position (their eyes) does not match the location of their third-person avatar's head. On the animation side, we had to make adjustments to the assets to better align the first- and third-person eye/head positions without destroying the essence of the animation. A few animations needed to be reworked, e.g., the "crouching while moving left" asset. The original asset had the rear of the player leading in the direction of the motion. This caused the player's body to be seen around a corner before the player's eyes. We then returned to our motion-capture stage and reshot the move, adjusting the head to lead in the direction of the motion and reducing the amount of body exposed before the player's eyes/head was able to see or be seen. While this greatly improved the experience, this is an issue that FPS shooters deal with, and like all development we continue to one-up the past iteration, and will continue to improve the player experience. Though "head-glitching" solves are a balance of tech and assets to aid the player's visibility, we also worked on purely animation-driven visibility cues within our locomotion. As you are playing against other players, you want as much information about your opponent as possible from a glance. The weapon they are wielding is a very good indicator of how you will engage your opponent. Many of our players sprint everywhere as they navigate through the playspace. We give each weapon class a distinct sprint animation. That means when you have a pistol, your third-person avatar will play a specific pistol animation while sprinting. This gives the opponent information as to what weapon you are holding and may give them the strategy to engage or withdraw. Another key element, in Warzone, was our "downed" mechanic. This is a state in the game where, if your health is depleted, you are put into a wounded state; if there are players alive on your team, they can revive you. As a player needing to know how to engage or withdraw from opponents, we needed the attacking player to know not only the state the "downed" player is in, but also when the opponent's teammate is in a reviving state. We knew that was a very tense moment within the game. You've downed an enemy, but you know they have a teammate alive … somewhere. Do you rush and finish off the downed opponent as they scoot away to their teammate? Or do you wait until you see the teammate reviving? It's a great sense of tension for both sides. For the team with the downed soldier, you have to choose to save or not save your teammate. While you are reviving, you are placed in a timed reviving animation and are only allowed to look around. That alone is stressful, because you know the enemy is coming.
To aid in displaying your terror, we made the third-person avatar kneel over the downed soldier, but we made sure the avatar is looking around in a semi-frantic way to convey to the opponent that you are in that revive state, and also to reflect the tension of that core loop. It's so much fun!
Authenticity is essential for the Call of Duty series, especially in terms of weapons, but where do you draw the line between realism and exaggeration to really sell impact?
We definitely do our due diligence and homework when it comes to how we showcase the military, tactics, and equipment in our games. Our fans are some of the best in the business, but they will let us know if things aren't quite accurate. Their passion for this is just as strong as ours. Authenticity is an absolute must, but as I mentioned earlier, I grew up loving action films, so there is always a place for "flair"! The trick is to use it in the right places and with the right tone.

An example of this balance is in our Viewmodel Animations (first-person weapon animations). The authenticity of how a weapon is handled and manipulated is researched and consulted on during the development of animation assets. After the animator understands the ins and outs of how the weapon operates, we look for areas to give a punch of "flair" that will bring in some of those dopamine pleasure points for the player: the pleasure of feeling like a "badass" while playing the game, giving hints of or full-blown Action Hero personalities to the asset. Of course, the project dictates where the authenticity or flair is needed, so the animators abide! Always ready!
Can you share some of the most complex weapon reloads in terms of state machines?
With our new Gunsmith feature, which we added to Call of Duty: Modern Warfare, our weapon animations expanded exponentially. Not only did our reload fidelity need to improve for this project, but we introduced, at times, three different ammo capacities or calibers for any given weapon. This required more time for asset creation, as these ammo changes are represented in the models. If a weapon had a Drum Magazine attachment, then we'd make Drum Magazine animations for Tactical, Empty, ADS Tactical, ADS Empty, Fast Tactical, Fast Empty, Fast ADS Tactical, and Fast ADS Empty reloads. Sure, we could have just sped up anims and ignored some of these details, but we wanted to deliver the most immersive player experience, and I think we achieved that. Like all achievements, though, we're always looking for ways to improve and tweak further, and we will!
The single-player experience features some of the most tightly scripted and memorable one-off action sequences in gaming. Can you talk about tricks you've learned in terms of driving the action forward during animated action sequences?
Hats off to the entire team for their dedication, creativity, and collaboration creating these moments. Infinity Ward believed in the product, and ultimately the product shined for it. There were many learnings during development. One was the way we would approach moments or scenes on the mocap stage. Building these scenes requires understanding from all parties involved, from the idea concept to shoot day on the mocap stage. Once we were on stage, we would give the actors as much information about the scene as possible, before and during the shoot, to aid in the performance: visual aids, audio aids, special suits (Juggernaut movement), and anything else to give the performer the complete picture of what we were after. And like many other areas of development, we are still developing new techniques to improve our process.
Similarly, any technical insights on syncing NPCs to the environment and the player's actions?
Like Viewmodel Animations, AI is a component of the game that spans multiple disciplines: animation, design, and code. For Call of Duty: Modern Warfare, like our third-person player animations, we invested in resources to bring our AI to a new level of fidelity. Introducing new locomotion blendspaces for NPCs aided in pushing that immersion I was speaking of earlier. We wanted the characters to feel like they had intent and thought as they navigated through the playspace. We also added procedural head tracking, giving the NPCs a level of perceived awareness. We invested in our ragdoll technology to give the player a better effect-on-target experience when laying down the opposition!
We pushed the strict guidelines of that system, adding more natural timing and performance to animation assets by selecting the correct performer for the right mocap shoot. This would include stunt people, military consultants, and actors, all aiding in bringing every NPC to life as we strived to keep the player immersed.
Lastly, do you have any fun anecdotes about iconic animations or mocap shoots from the series?
Long, long ago on Call of Duty 4: Modern Warfare, there was an animation lead who said, "As a child, I would study Disney films on my VHS, frame by frame, to understand how they were able to make things look so real, yet have their unique style with animation."

Well, he was given a task to bring Viewmodel Animations to life. Oh boy! I loved Soldier of Fortune II: Double Helix's weapon feel and wanted to bring that to Call of Duty. Though I loved the feel, I wanted to slow down the timing and really appreciate the animation. Well, the designers were not having that, and many times requested we speed the animations up for gameplay. Me, not understanding why they didn't want to watch our beautiful, tastefully long animations, would adjust sometimes. As clever as they were, and accommodating to my plight, they created a perk to speed up the animations. When the perk wasn't equipped, the animations stayed tastefully long and realistic! That perk's name is Sleight of Hand. The End.

Chapter 6

The Game Animation Workflow
Being a great game animator means more than just creating beautiful animations. When each individual animation is only one of potentially hundreds of assets required to bring your character to life, being productive and efficient with your time is also an invaluable skill to cultivate. While every animator must discover their own unique techniques for bringing ideas to life, below are some invaluable practices to start with to ensure you maximize your output while minimizing wasted effort.

Reference Gathering
More often than not, the secret behind an animation that stands out as exceptional is the animator's use of reference, regardless of the animator's skill or experience. Good video reference brings in subtleties and detail that may be missed when creating purely from imagination, and it can make the difference between a good animation and a great one. Though juniors sometimes wrongly dismiss it as cheating or copying, most experienced animators spend time sourcing video reference or shooting their own before tackling any significant or demanding animation task.

If the very best animators out there rely on obtaining good reference, then so should you. No video can compare to real-life observation, but often it's the only way we can observe motions that are not easily found or recreated within a studio environment. When real-life reference is not possible because of time or budget constraints, most videos can be found via a quick Google or YouTube search, and another great resource for quality videos is gettyimages.com. If you can recreate the action yourself, you will benefit greatly from recording yourself or a friend/colleague performing it, not least because you can control the camera angles and recreate the exact action you wish to animate. The ubiquity of phone cameras means virtually everyone can easily obtain self-shot reference, and nowadays there is no need to cable-transfer to your work computer when simply emailing the file(s) will suffice. When you and/or a teammate cannot re-enact the motions, or something more exotic like animal reference is required, a field trip to the zoo or an equivalent is next on the list. Unfortunately, game studios can sometimes be reticent to spend money or devote resources (your time) to sending team members out to gather reference. The idea can be better sold by combining animation, audio, and other disciplines' trips to maximize their value to the team, with a smaller group recording material to be shared with the whole team on their return. When you're required to animate a fantasy creature that does not exist, look for the closest real-life counterpart (or combination of animals) that most resembles the creature. For example, an alien tentacled creature would benefit greatly from reference of a squid, cuttlefish, or octopus. The same can be done for human characters that require elements of animalistic behavior. A monster chasing the player may have some qualities of a raging bull, such as the blind determination to catch and attack the target, from which bull reference can provide timing and attitude, if not the exact motions.

A reference-gathering field trip to the zoo.

While exaggerating movements and timing in your self-shoots can aid reference gathering for stylistic purposes, be careful not to overact. Too often, animators will act out the motion as they see the final animation in their head, when this kind of stylization should only be layered in when it comes time to animate; overacting essentially negates the purpose of using reference in the first place.

Once you have the video you require, you need to view it in a player that makes it easy to scrub (move forward and backward) alongside your animation for comparison. It is possible to embed video reference into the background of the DCC to facilitate one-to-one timing and posing, but that really takes the fun out of animating and, just as with mocap, timing should be the first thing enhanced compared to real-life reference, with posing the second. As such, it's recommended to keep your reference separate in a viewer. Recommended free video players are QuickTime and VLC; paid options include RV and Keyframe MP.

Reference video can be embedded in the DCC’s background.

Don't Be Precious
One of the worst feelings you can experience as a game animator is having an animation or system you poured your heart into cut from the game after days of hard work. While there are few certainties in the dynamic and nonlinear whirlwind of game development, one thing you can count on is that this will happen. You can fight it, try to avoid it, or just accept the inevitable.

All are valid paths, but one way to avoid heartache is never to have loved at all, though it is impossible to convince an artist of that entirely. While it will always hurt to lose work you cared about, it helps to know that each cut asset and every discarded prototype was ultimately useful in helping the team arrive at the final game. Whether the result is a better animation, a more efficient system, or a more focused game, no work is truly wasted when exploring it was essential to arriving at a different but better final result. To this end, never be overly precious with an individual animation or gameplay mechanic. Something that is great standing on its own might just not fit the grand scheme of the game (and every great idea can be saved for the sequel). The following steps should help minimize the impact of cut work.

Animate Pose to Pose Over Straight Ahead
A key traditional principle: there are generally two main approaches to animation creation. Straight-ahead animation is done in a linear manner, with the animator progressing chronologically through the animation from start to finish, whereas pose to pose takes the approach of initially "blocking in" a rough version of the entire animation, then modifying and refining from there. The former, mostly a holdover from traditional 2D animation, is said to be generally freer, with animators beginning unaware of where the action might end up and discovering the motion during the creation process. The latter is much more controlled from the start, with animators able to edit posing and timing as they mold the action into the final result. More often than not, pose to pose is the preferred method for game animation: we generally know our start and end poses already, posing and timing are relatively easy to modify in 3D, and it feeds directly into the workflow described below.

Rough It In
Using a pose-to-pose approach allows rough "proxy" versions of any required animation to be created very quickly, with minimal commitment from the animator. Long before focusing on the kinds of details that really bring an animation to life, the animator should aim to create a quick "blocking" pass that can then be used by other team members who may be waiting on something to get their own work started. Keeping an animation rough like this for as long as possible means there is minimal wasted work should the requirements change or the action itself be cut, and if it is still used, it makes it far easier to modify the timing (important for response) and posing (making the action readable) while there are minimal keys involved.

Lab Zero's Skullgirls previews 2D animations in-game before balancing and polish. (Courtesy of Autumn Games.)
Maintaining a minimal number of keys on a rough animation (avoiding visual niceties like overlap and offsets) allows individual frames or complete sections to be more easily reposed and retimed.

There can be instances when you will be judged on the quality of quick-and-dirty temporary animations by other team disciplines unfamiliar with the process, so it is important for the animation team to clearly communicate with and educate other disciplines about this aspect of the workflow should they have concerns. Holding back work until it reaches an "acceptable" quality just to avoid raising concerns is simply not worth losing the benefits of working rapidly with a fast and loose approach.

Get It In-Game!
It cannot be stressed enough that by far the best way to evaluate an animation, even in its early stages, is to review it in the game under the correct camera field-of-view (FOV), on the actual characters, at 30–60 fps (seeing it in context). The most desirable workflow, therefore, is to get the animation hooked up and playable ASAP (sometimes it's worth pointing the game to an empty or T-posed scene even before creation) so as to make iteration on the action as fast as possible. Often the animator will be ahead of the programmers required to get an actual gameplay system up and running due to their many commitments, so it is desirable to create test levels to review animations that aren't fully viewable in their correct gameplay situation. Even better, you can temporarily replace an existing animation if required to see yours on the player character; just be careful not to submit your changes to the game lest you break a system someone else is working on or requires for their own work.

Game Anim: Video Game Animation Explained character—just be careful not to submit your changes to the game lest you break a system someone else is working on or requires for their own work.

Iteration Is the Key to Quality

If there is one single aspect of the game animation workflow that most quickly drives your work toward quality, it is reducing the time it takes to iterate. You will never get an animation working in one shot, as it’s impossible to judge how it will look and play until it’s fully in context with all the related gameplay elements working in concert. Knowing that you will iterate on the animation once it is in-game should help you zero in on the final result faster.

Animating in real time within Unreal Engine 4. (Copyright 2018 Epic Games, Inc.)

With an animation hooked up in-game, the amount of time it takes to see changes on that animation should be reduced as much as possible, with the ultimate goal being a 1:1 real-time update. Ultimately, this is often beyond the animator’s control but should nonetheless be something to strive for with the engine programmers if improvements can be made in this area.

Blocking From Inside to Out

The root of your character affects every other element, so it makes sense as the place to start your rough blocking pass, and it can give you a good idea of timing before you’ve even done your posing. Once happy with the “rhythm” of your action from start to finish, begin to assemble the relevant poses around this action with the head and limbs (in the case of a humanoid), modifying the timing and adding more poses as required. When you begin to see a human as only a few elements (a root, head, and four limbs), the process of blocking in a sequence becomes much less daunting. Only once these aspects have been posed out should you finish the blocking pass with smaller details like hand poses, clavicles, and improved spine and pelvis orientations. This approach is the fastest way to block in, giving the animator visibility on the entire animation with each new quality pass.

Begin by animating the main body mass only for timing.

Pose-Sharing Libraries

No animator has time to make the same facial or hand poses over and over again, so a library of pre-created poses is an essential tool in any workflow for rapid creation of assets. Access to the poses should be centralized on a network so that every animator can share the same assets, allowing for pose standardization and stylistic consistency across the team. Because many game animations come to and from poses such as the idle (or several different idles), a way of quickly copying and pasting oft-used full-body poses like these across many individual animation scenes is essential.
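As a concrete illustration, the core of such a library can be as simple as serializing control values to a shared network folder. The following is a minimal sketch only, assuming Maya, a rig whose controls are passed in by name, and a hypothetical shared path:

    import json
    import maya.cmds as cmds

    POSE_DIR = "//studio/share/poses/"  # hypothetical shared network location

    def save_pose(name, controls):
        # Record every keyable attribute value on the given rig controls.
        pose = {}
        for ctrl in controls:
            for attr in cmds.listAttr(ctrl, keyable=True) or []:
                plug = "%s.%s" % (ctrl, attr)
                pose[plug] = cmds.getAttr(plug)
        with open(POSE_DIR + name + ".json", "w") as f:
            json.dump(pose, f, indent=2)

    def apply_pose(name, key=True):
        # Restore the stored values and optionally key them at the current frame.
        with open(POSE_DIR + name + ".json") as f:
            pose = json.load(f)
        for plug, value in pose.items():
            cmds.setAttr(plug, value)
            if key:
                cmds.setKeyframe(plug)

Saving the idle once and applying it at the first and last frames of every new cycle is exactly the kind of repetitive task this removes, while keeping poses identical across the team.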



A hand-pose library can be an invaluable time-saver.

Keep Your Options Open

Anticipating potential changes to an animation is a skill that comes with time as you work with designers more and more, so why not make it as easy as possible to make the quick modifications you expect will come? A good way to facilitate this is to animate certain potentially variable elements (such as the overall arc height of a jump) on a second layer so that the entire animation can be changed with only one key, as sketched below. Just remember to commit at some point, or your animations can quickly become difficult to parse, especially for any other poor animator who must make edits later as they struggle to decipher your scene setup.
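A minimal sketch of that layered setup in Maya, assuming a hypothetical control named jump_root_ctrl; the base layer holds the full animation while the adjustment layer holds a single offset key:

    import maya.cmds as cmds

    # An additive layer reserved for the jump-height adjustment only.
    layer = cmds.animLayer("jumpHeightAdjust")
    cmds.select("jump_root_ctrl")  # hypothetical rig control
    cmds.animLayer(layer, edit=True, addSelectedObjects=True)

    # A single key at the apex offsets the entire arc; revalue or retime
    # this one key to change the whole jump without touching the base keys.
    cmds.setKeyframe("jump_root_ctrl", attribute="translateY",
                     time=15, value=2.0, animLayer=layer)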


Game animators on teams should not expect to be the sole owner of any aspect of development, and that includes individual animation scenes, as finishing a game often requires an all-hands-on-deck approach. It is essential to maintain clean animation scenes whenever possible in the likely event they will be opened by multiple animators. Additional animation layers should be correctly named and kept to a minimum. Use of constraints to drive aspects of motion should be sparse but, when used, should be visually obvious and correctly named. Version-control submission comments work best when they clearly state the major revisions in each check-in, written in a manner that can be understood by someone new to your scene. Essentially, anyone left to pick up after you should find a clean house so they can get straight to work.

One bad habit is retaining animation layers for the purpose of allowing you to undo changes, when the cleanest and best way to leave yourself options is to go back to previous versions of a file via the project’s version-control software. It is imperative to write submission comments that reflect any changes you may wish to revert at a later stage. Submit versions several times a day (on top of local automated backups enabled from your DCC preferences) to prevent work loss in the event of a crash or power outage. Submitting before a major change is highly recommended.

Use Prefab Scenes

Throughout the project, the game animator will likely be creating hundreds of animations, so any opportunity to automate repetitive tasks, however small, can be an invaluable time-saver. One such task is setting up your scene to be ready to animate. Rather than starting every new animation from an empty DCC scene, take your scene as far as you can, then save it as a prefab from which you will start all others. For example:
• If you’re primarily working on one character, reference them into the scene.
• Set visual layers to display only the ones you prefer to work with for that character.
• Add any additional animation layers you often end up adding.
• Add cameras and/or other cinematic elements required.
• Add any additional elements required for the export process.
• Save this scene in an easily accessible folder.
Going through this process will enable you to get started animating as quickly as possible for each new animation.
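The prefab itself can even be rebuilt by script so every animator starts from an identical setup. A rough sketch, assuming Maya and hypothetical file paths:

    import maya.cmds as cmds

    def build_prefab(rig_path="path/to/hero_rig.ma",
                     prefab_path="path/to/prefabs/hero_start.ma"):
        cmds.file(new=True, force=True)
        # Reference (rather than import) the rig so character updates flow through.
        cmds.file(rig_path, reference=True, namespace="hero")
        # An animation layer most scenes end up needing anyway.
        cmds.animLayer("polishLayer")
        # A review camera roughly matching gameplay framing.
        cam = cmds.camera(name="reviewCam")[0]
        cmds.setAttr(cam + ".translate", 0, 1.6, 5, type="double3")
        # Save as the template every new animation starts from.
        cmds.file(rename=prefab_path)
        cmds.file(save=True, type="mayaAscii")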


Start new animations from a prefab scene set up to your liking.

Commonly used window layouts are another form of prefab that can save a lot of time over a project’s life cycle. If you have a different setup for animating a single-character gameplay animation than you do for a cutscene with cameras, be sure to save them, clearly named, in the DCC so you can easily switch depending on the task at hand.

Avoiding Data Loss

One of the worst fears of any animator is losing work due to a crashing DCC or, worse yet, accidental file deletion. Thankfully, there are a variety of best practices to adhere to that can minimize the potential risk of data loss without slowing down a workflow, listed below in order of risk/reward should you be unlucky enough to lose or choose to revert work.

Set Undo Queue to Max

Experimenting with a new pose or series of poses only to find it doesn’t work is one of the great advantages of 3D animation over the traditional hand-drawn approach; however, the further you go down one path, the more you run the risk of being unable to fully undo the experiment, frustratingly requiring a full reload of the file. Take the guesswork out of the undo limitation by setting it to the maximum possible.
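In Maya, for example, this is a one-line setting (a sketch; other DCCs expose the same option in their preferences):

    import maya.cmds as cmds

    # Enable undo and remove the queue's size limit entirely.
    cmds.undoInfo(state=True, infinity=True)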

Configure Auto-Save

As a one-time setup when first opening your DCC, ensure that auto-saves and auto-backups occur at regular intervals. While auto-saving, you’ll generally be locked out of your work, which can prove frustrating, especially when working on heavy scenes that take some time to save. As a rule of thumb, set the auto-save interval to the maximum amount of work that you are prepared to redo in the unfortunate circumstance you lose your work at that point.
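Again using Maya as the example, a sketch assuming ten minutes is the most work you are prepared to redo:

    import maya.cmds as cmds

    # Auto-save every 10 minutes (the interval is in seconds), keeping a
    # rolling set of 10 backups rather than filling the drive indefinitely.
    cmds.autoSave(enable=True, interval=600, limitBackups=True, maxBackups=10)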

Save Often

Why rely only on auto-backups? Make sure to save your file often, especially before you are about to perform a difficult task or one that you may wish to revert. Remember that saving overwrites your previous save, so you must be confident you are committing to all your most recent decisions since the last version submission.

Incremental local saves (when the DCC automatically saves a new file with an incrementally increasing numerical suffix such as 01, 02, 03, and so on) are generally less desirable than relying on version control, primarily because there is no opportunity to comment on what changes were made for the benefit of other users (or the animators themselves, especially when time has passed before returning to a file). In addition, only those with access to your machine can return to previous versions—especially risky should the hard drive fail.

Version Control

The most robust form of saving your work is to submit a snapshot of your current scene to a database that is then duplicated (or physically stored) off-site. This is the choice of most studios that cannot risk losing work and time, as it also protects against disastrous occurrences like fire and theft. The biggest advantage of version control is the decentralized access to all versions of a file, allowing other team members access to submitted files (as well as locking files while you are working to prevent two people working on the same thing), with each submission requiring notes that describe the important changes. Get into the habit of clearly describing important changes in each version, not only for others but also for yourself should you wish to revert a scene to before a significant change. Submissions to version control should be almost as frequent as your local saves to ensure you have the maximum number of options when reverting to a previous animation direction or debugging something technical that went wrong in an earlier version of the scene.
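What that cadence looks like depends on the studio’s system; assuming Perforce, a common choice in game studios, a typical check-in scripted from Python might look like this (the file path and comment are hypothetical):

    import subprocess

    # Check the scene out so it is locked against other animators...
    subprocess.run(["p4", "edit", "anims/hero/hero_jump_start.ma"], check=True)

    # ...then, after saving, submit with a comment future readers can act on.
    subprocess.run(["p4", "submit", "-d",
                    "hero_jump_start: raised apex per design feedback; "
                    "retimed anticipation from 8 to 5 frames"], check=True)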


(Courtesy of Nomada Studio.)

Interview: Adrián Miguel Animation Lead—GRIS

There are quite a few GRIS line-art tests available online. How do you prototype before moving onto the coloring/polish phase?

Our process goes something like this. We decide the action that the player character/NPC will do. Then, our art director and I talk about keyframes. Sometimes he has an idea for a very specific silhouette; other times I’ll make a keyframe that depicts the general movement and the art director will make some adjustments just to make sure the tone of the keyframe is consistent with the rest of the game. At Nomada, we try to move away from too much power fantasy stuff. Instead, we try to find kind of sober and elegant poses that sell the artistic vibe the studio is going for, and that’s why this part of the process is quite important for us. Then it is pretty straightforward. I’ll take the keyframe, make a really rough animation, and send it to the creative/art director. If he likes the general flow, then we usually take the rough animation and put it right into the engine to see if the animation needs any further refinements (maybe the silhouette doesn’t match the jump’s trajectory, or maybe the run animation is too slow or too fast …).


I’ll then make any change required and step into the next phase of the pipeline. Our Clean Up department is composed mostly of junior illustrators. So, to make sure we don’t give them more work than needed, our Inbetween team and I would tighten up the drawing in each animation from rough to a mostly clean pass. This way the clean-up department only worries about the line being super clean and the character’s final colors and composition with layer effects, and doesn’t need to check the character model’s proportions or anything like that.

How do you attach gameplay elements like collision/hitboxes or sound/VFX event tags to your 2D animations?

Nomada is a little particular about the way it treats its artists. If there’s some “busy work” that programming can do (attaching collisions, importing sprite sheets into the engine, etc.) in order to keep us artists free to produce more art for the game, they’ll handle it. So there are a lot of gameplay elements attached to the animations (collisions, tons of particle and sound effect events …), but I didn’t personally put them there. I did give feedback when something felt off, but that didn’t happen often, as our programmers are really good and have a great sense for game-feel. We didn’t use any animation systems like Blend Trees, Mecanim, or Unity Transitions. The whole animation system is code-based and was programmed by us. This means that each time we put an animation into the game, our programmers would hardcode, frame by frame, when an animation could be canceled by another animation. Then, the game would transition to this other animation at specific frames depending on the situation, which was also hardcoded! So, for example, if the player is on frame 4 of the run cycle and decides to stop, the game would load the “break animation” at, let’s say, frame 5, which looks good as the character was in the “passing” phase of the run cycle. But if the player decides to stop while on frame 12, then the game would load the “break animation” at frame 2, which looks better because the legs were spread at that particular moment of the run cycle. This was hardcoded, frame by frame, for all the animations in the game. Of course, this adds extra work, but it gave us complete control over the animation transitions, and this is what makes GRIS’ animations so fluid and yet the controls so responsive.

What is your chosen animation software and why?

For GRIS we worked with Photoshop, mainly because our art and creative director has a long career in illustration, so if he wants a really specific look he naturally thinks in terms of this software. I knew how to make really basic animations in Photoshop, so it was an easy choice. There were some obvious advantages. For one, the whole art team worked in Photoshop, so it was really easy to make changes and give feedback using the same type of files. Also, it was really easy for newcomers and interns to get the hang of the pipeline, as it is much easier to find junior clean-up artists proficient in Photoshop than in any other animation software.

GRIS’ signature motions involve a lot of magical cloth. Do you have any specific recommendations about creating cloth animations in 2D, particularly with relation to cycling?

Wave animation, or undulations, is mostly what I did for GRIS. Not only for cloth, but for the main antagonist in the game too, which is this creature made of animated black tar. And it is so tricky! There was a ton of nuance to it.
Depending on the length and speed of the wave, you’d do this typical “C” to “S” to “inverted C” shape. But sometimes, if the length of the wave was really short and subtle, I’d have to “undulate a line” without changing its shape.

The basic problem that I ran into was that getting the spacing right was a matter of trial and error. You can understand the principle, but to nail it down in each and every specific animation required a ton of tweaks. It gets harder once you introduce cycling into it, as a lot of the time I animated straight ahead, making each drawing after the last. This usually meant that the last frame and the first one didn’t match at all, and I would end up changing the first frame, and consequently the rest of the cycle! I feel you can plan the undulation or draw it straight ahead guided by instinct, but no matter what, you’re gonna end up tweaking the spacing guided only by your eyes. Eventually, I got so used to it that now I can do it in an instinctive way. But there were some amazing side effects of getting used to this animation principle: transitions! I think transitions are a luxury that not many 2D games can have. Usually, in 3D, you can use blends to ease the transitions from one animation to another, which broadly (and I mean broadly) means the computer “does it for you.” But in 2D traditional animation, if you want a transition, then you have to do it from scratch. GRIS is a game that has only one main playable character in a world that’s beautiful but quite empty in terms of NPCs. So we had the luxury to make as many animations as we wanted for our player character. And one of the main tools I used to make the transitions flow was cloth waves. Each transition has at least one undulation on the dress (sometimes two if the movement was really violent) that kind of “whipped” softly into the undulation of the cycle that we’re transitioning into. This, mixed with the ability to transition into any chosen frame of any cycle, is what makes GRIS’ fluid-like signature style.


Chapter 7

Our Project: Pre Production

For this section of the book, we’re going to look at the video game production process from an animator’s standpoint in a more sequential manner, to better help you understand the challenges you’ll face if you’re fortunate enough to find yourself on a project from the very beginning, on a hypothetical new intellectual property. In the games industry, it is perhaps the rarest and most sought-after role: to be given a blank slate on which to make your mark on the gaming landscape. For our purposes, we’re going to assume a third-person story-based action game, as that is perhaps the most comprehensive project one can imagine from an animation standpoint. While small studios and single-person indie teams offer more opportunity to land this enviable position due to the one-team-one-project nature of their productions, large AAA games typically begin with a small “core team” that lays the groundwork for the rest of the team as they roll onto the project after completing others within the studio. Multi-team studios typically stagger development so that, as one game ships and the team takes a well-earned break, they can return to ramp up on a game already underway.


If you find yourself on a core team as an animator, you will be expected to begin researching animation styles, working closely with your teammates to define not only who the player character is, but how they move. Long in advance of actually getting a character playable in the game, animators have a unique set of skills that allow them to mock up or “previsualize” how this might look in video form—creating a visual blueprint for motion and even design and gameplay ideas. Think of previz as the moving equivalent of concept art, which can contain elements like motion style, speed, camerawork, and unique gameplay features. Importantly, as with much of game development in the experimental phase, expect none of this exploratory work to make it into the game. As such, polished art and animation should be furthest from your thoughts. Mock-ups should feature temporary characters with temporary blocked-in animation only. Remember, you’re not only experimenting with animation but also game design ideas that should come about from discussion with designers. Keeping everything rough and loose should allow you to work fast and make changes on the fly—such is the exciting and exploratory nature of game design at this stage in a project.

Style References

You wouldn’t start a difficult animation without reference, and there’s nothing more open (or terrifying) than a blank Maya scene on a game that doesn’t even exist yet. Before jumping straight into creating, discuss with the team leadership and your teammates what references and tent-poles you wish to draw upon for the game you’ll be envisioning. Oftentimes, this will involve the core team watching relevant movies, playing inspirational games together, and sharing related images and websites, so you all have the same experiences upon which to draw and refer to later. At this stage, you’re looking to create a common language when discussing the vision of the game, so it’s a good idea to collect your reference videos and search results into a central folder structure or wiki that can be easily shared not only among the current team but with future members as they join the project. For example, you can have separate wiki pages for the various elements (combat, navigation, etc.) of your game with videos embedded in each, or a folder for animation reference that contains folders for combat videos, navigation videos, and so on. The point of the reference library you’re building is that you can easily link to or pull up a video in a discussion about a feature when you want a great example of how it might look, or of how your character might move overall. As a reminder, however, the worst art is based on other art. Make sure your entire vision is not based on other fictional or artistic references, or your animation (and entire game) is doomed to be limited by the original work. If you want your weapon handling to be exceptional, then only looking at other shooting games means your first-person weapon animations will likely be no better than theirs. Look for unique video examples. Try to find uncommon actions or military styles you haven’t seen elsewhere. Even better, meet with weapon experts or go through basic training yourself. If your game relies heavily on weapons, then you should always have dummy weapons in the studio to illustrate and try out ideas throughout the project. Getting away from a screen as much as possible can only help at this tentative stage of the project. Adding more reference to this library, especially original reference from outside video games, movies, and comic books, will stand you in great stead throughout the entire project. Reference gathering should continue throughout the game development cycle and should be an important first step for every new element you wish to add. This can only help in setting your animation apart from the crowd.

Brütal Legend’s rapid pose-to-pose style fits the volume of work. (Courtesy of Electronic Arts.)

Defining a Style

Now that you have a variety of references upon which to draw, the style in which you animate is an important artistic trait to lock down early, as it cannot easily be changed midway through production. Questions that arise might include:
• What kinds of characters make up your cast?
• Are the characters to move realistically or display superhuman actions?
• Can they jump around with abandon, or are they weighted and grounded in the world?
• Considering style and workload, might the project require mocap?
• If so, how will you treat it once delivered—shall it maintain a realistic style or only be used as a base for exaggerated posing?
• If the characters are cartoony, how does that translate into their movement?
• If the setting is heavy on violence, will the tone be helped or hindered by unbelievable character movements?
• Similarly, if the story is to carry some weight, then how seriously should actors be directed at the mocap shoots?
• If nonrealistic, which of a plethora of potential styles will be assumed?

Comparisons

The easiest way to discuss options like those above is to draw comparisons to other works. Is it more or less gory than game X? We like the cartoony style of game Y, but we want to exaggerate the characters even further. Is it a mash-up between the visuals of movies A and B with a little of C? These are quick ways to ensure the team is on a similar page with regards to visuals before anything has even been created. If looking for something entirely original, then, again, looking outside the world of movies and games will yield better results. A puzzle game in the style of M.C. Escher is something most can understand with a single image, while taking character design cues from Cirque du Soleil will require everyone unfamiliar with their performances to be introduced to them.

Realism vs Stylized

Again, comparisons are the best vernacular with which to agree upon directions to investigate, but a final direction can only be arrived at through iteration. It’s easy to state that we want 35% realistic and 65% cartoony, but that means nothing until the character is moving in a digestible previsualization and can be modified.

Who Is the Character?

Familiar characters from movies and television are the easiest way to explain your character to those unfamiliar with the project, but if you want something original, it’s best to also investigate notable historical and nonfictional characters and imagine who they would be as people. Many great real-life stories, and the people involved in them, are the inspirations for the best books and movies, so a wider range of experience on which to draw will inform not only your character-creation process but your acting choices, too.

Previz

Previsualization is the single best tool animators have at their disposal in the early stages of a project, when there are minimal assets to work with and the possibilities of what the game might be are wide open. Animators are in the best position to mock up what any gameplay scenario might play like in visual form, using any means at their disposal to grab temporary assets and animate to a faux gameplay camera to illustrate ideas.

Gameplay Mock-Ups

Animators with a basic understanding of simple modeling techniques can create basic blocky environments and gameplay situations that, while looking rough, amount to moving concept art of any given idea they and anyone on the team can imagine. This serves not only to begin answering questions about how the game might look and play, but also to motivate and excite others on the team before a single line of code is written. This can be something as simple as how a series of jumps might look for different height/distance variations, to a complex combat system, to a brand-new gameplay mechanic never before seen in video games—anything is possible! If set up efficiently with an eye for reuse, these temporary assets can be the basis of the first pass of actual animations exported into the game to begin prototyping with. At the very least, they will inform the animator how best to create and export these animations when required. As for available characters to animate, these can be work-in-progress proxy meshes built by the character artists or completely throwaway assets from previous projects. A sequel benefits from directly starting where the previous iteration left off; similarly, in the case of a new IP, older assets are generally phased out over time as new character art comes online, with only initial prototypes and previz created with the last project’s assets as a starting point.

Animators can mock up gameplay to prove ideas.

The more projects a studio successfully completes, the more valuable its asset libraries become, shared not just among team members on the same project but across multiple projects, especially when those projects are similar or are sequels. Asset libraries are a boon when beginning a new project, whereby characters, props, or even mocap from previous projects can be reused so animators can begin previzing immediately. Libraries of mocap (ideally raw and untouched) speed up prototyping times and can also reduce the budget on the full game by avoiding having to recapture similar actions repeatedly.


A standard camera setup that mimics that of a third-person gameplay camera involves parenting a camera to an object to be used as a pivot (usually a simple locator) that is itself point/position constrained to the player character so it moves, but does not rotate, with the character. This not only recreates the behavior of a third-person gameplay camera as it pivots around the character, but also moves with the character as they move through the environment. Be sure to set up the wider camera FOV and position as close to the actual/desired gameplay camera as possible for accurate gamelike visuals. Find more on camera settings later in Chapter 10, “Our Project: Cinematics and Facial.”

Previz gameplay camera rig setup.
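A sketch of that setup in Maya, assuming a hypothetical character root control named char_root:

    import maya.cmds as cmds

    # The pivot follows the character's position but not its rotation.
    pivot = cmds.spaceLocator(name="camPivot")[0]
    cmds.pointConstraint("char_root", pivot)  # position-only constraint

    # Parent the camera under the pivot so it orbits around the character,
    # using a wide FOV close to the intended gameplay camera.
    cam = cmds.camera(name="previzCam", horizontalFieldOfView=60)[0]
    cmds.parent(cam, pivot)
    cmds.setAttr(cam + ".translate", 0, 1.8, -4.5, type="double3")

    # Animating the pivot's rotateY now mimics a third-person camera orbit.
    cmds.setKeyframe(pivot, attribute="rotateY", time=1, value=0)
    cmds.setKeyframe(pivot, attribute="rotateY", time=60, value=90)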

Target Footage

Once enough work has been done to visually describe individual gameplay ideas, a single long take of pre-rendered target footage that mocks up a typical exciting few minutes of gameplay works wonders for galvanizing the early core team around how the eventual game might look. Working with other team members to make decisions on early character designs and environments takes the team one step closer to the finished look of the game, and the footage provides not only great motivation early in the project but also a way to quickly bring new hires up to speed as they arrive later. As this is purely an art/design task (nothing is expected to be playable in the game, though it can be rendered in the game engine if that’s easier), it frees up programming and other more technical disciplines to make headway on getting the basics of the actual game up and running in concert with the target footage, with the goal of being ready to begin real prototyping once done. Note that, as with most of game development, the nonlinear nature of production means that the order of these early tasks as described in this book (or whether they are done at all) is very fluid and dependent on budget and staffing schedules.


The Last of Us was in part pitched via this mock-up gameplay footage. (Courtesy of Sony Interactive Entertainment.)

A key visual element of successful target footage is that it must not look like a movie or cutscene; instead, as much as possible, camerawork should aim to mimic that of the gameplay cameras. Gameplay actions, especially those of the player, must strive to at least look possible in a gameplay context, as they will be expected to be created in the final game. This is the time for tough decisions to be made about the viability of designs, as the target footage is where they will first be seen. That said, no new game design idea was ever created without going out on a limb into the somewhat unknown, so this early concept phase is the time to dream big. It’s desirable to push this target footage beyond standard conventions with a view to some questions being solved later, especially if the gameplay being shown has never been attempted before and so has no pre-existing counterpart. One of the hardest technical aspects of the creation of target footage is that whereas gameplay animations are typically made of many small cycles and transitions, this element of the project is one long animation from start to finish. This makes it very difficult to make changes and additions later on as the character moves through the environment, especially if those changes result in the character moving to a different location. As such, like all previz, it is imperative to keep the character’s position in the world loose for as long as possible until it can be locked down before polishing. While gameplay rarely has camera cuts, use any cuts available to divide the animation up into multiple scenes to be shared among animators whenever possible, using fast camera motions to mask seams in the absence of a cut.


Repeat cycles in the DCC’s sequence editor.

One trick to reduce the complexity of one long animation that moves through a 3D space is to create versions of all the basic movement cycles and other animations that will be seen repeatedly in the target footage and play them on the characters while they are animated to move through the world. This has the benefit of simultaneously building the required animations to be put into the actual game during the prototyping phase.

Prototyping

Prototyping game systems is the next stage of game development following previz. While gameplay prototyping can begin with simply a pen and paper, the game animator comes into play when it’s time to get things into the engine. When prototyping, it’s important to move fast without getting too bogged down by technical and artistic constraints. Animations need not look any better than at the previz stage—in fact, a game idea can often be proven out without characters or movement, using only primitive objects and onscreen text to describe actions, but proxy art and animations can only help. Elements can be imported into the engine without worrying too much about cleanliness and organization of assets, due to the expectation that all this work will be thrown away, though this is a good time to start establishing some initial file management. Using any character or pre-created animation assets from the previz stage, the game animator should aim to get a character up and running in the engine with basic movement, at the very least allowing designers to move around the environment. This enables level designers to get an idea of the scale of spaces as tweaks are made to character size and speed. Being heavily involved at this stage gives an animator time to become familiar with the engine, export, and character setup process if new to the tools. From there, the real prototyping can begin for gameplay systems that are as-yet unknown in terms of how exactly they’ll work. Here, animation and design (though possibly animation programmers at this stage) will typically work hand in hand to add custom moves without real concern for scalability (a final system will likely require many more animations than a prototype), to get things playable as quickly as possible with minimal turnaround. At this stage, iteration is key, so systems ideally remain light with minimal asset requirements. The animator should expect to be making changes to the already-exported animations daily, at a rapid pace, in order to work toward that elusive fun factor.


Prototyping with the mannequin provided with Unreal Engine 4. (Copyright 2018 Epic Games, Inc.)

Because speed is of the essence, it may be worthwhile investigating prototyping outside of the game engine to be used for the final game, which can often be bloated and bogged down with legacy assets—especially AAA engines. For example, one way of moving fast is to use a pre-existing framework such as Unreal’s “Blueprints” to hit the ground running, temporarily using free online assets with basic systems such as movement already in place.

Pitching the Game

The primary goal of animation in pre-production is to answer, via previz and prototypes, as many of the major questions about the project as possible. This leaves the team in the best position to begin drawing up realistic schedules and lists of required animations for all the various planned systems. Toward the end of pre-production comes the project “green-light” pitch to a prospective publisher, upper studio management, or even a crowdfunding site to be granted the necessary budget to start the project in earnest. Here the animators and their ability to previsualize a game before it starts can be an integral resource for convincing others to give the go-ahead (or part with their cash). Gone are the days of static documentation and game ideas written on the back of a napkin (in the AAA space, at least). Now, in order to get a project green-lit, the team, after some time in the idea-generating conception phase, must lay out the entire high-level plan for a game’s creation, in which the target footage is the jewel in the crown. Nothing convinces executives like a moving visualization. In addition, any previsualization and prototypes that are suitably explanatory will greatly support the typical pitch package of PowerPoint presentations covering the team’s high-level design aspirations as well as expected timeline, budgetary requirements, and sales goals. Time permitting, it is worth cleaning up the rendering of any useful previz videos for a consistent look to maximize the chances of success.

Early gameplay prototypes of Horizon: Zero Dawn’s robot dinosaurs. (Courtesy of Sony Interactive Entertainment.)

Here, the animation director will be expected to set out the vision for the animation style and explain the artistic choices behind it. Is the animation style realistic or not? How much will the game rely on mocap, if at all, and how does the animation director plan to manipulate it for the final look? What is the plan for storytelling and exposition? Without target footage, the team will at least be expected to show examples of fully polished sequences and vignettes that clearly illustrate the expectations to even the untrained eye. This will all be accompanied by art concepts of key characters that back up the stylistic choices of the animation, with the art director or character lead presenting similar high-level goals for the characters to be animated. All in all, the pitch is a snapshot of the entire game from the perspective of all disciplines that should excite anyone lucky enough to watch. Like the target footage, the pitch is useful throughout the project as new team members arrive, as well as being a good anchor to remind the team of their original intent and steer them through the wild and wonderful ride of an entire game project.



Lab Zero’s beautiful Indivisible playable prototype proved out both visuals and gameplay systems. (Courtesy of 505 Games.)

If all goes well and the project is green-lit, the team will move into full production. At this point, the team size will very likely grow to accommodate the more specialized roles now required, not least because the volume of assets to be created necessitates many talented team members. But before this book moves on to that stage, there is one very important animation area whose team members will have been working simultaneously throughout pre-production to ensure the animation team can ramp up in size and begin creating real assets that will make it into the final game—that of technical animation.


(Courtesy of Eric Chahi.)

Interview: Eric Chahi Creator—Another World

Another World was a standard-bearer for in-engine cutscenes, with a lengthy intro and many inserts during gameplay, long before the term “cinematic” became an overused adjective for cutscene-heavy games. What drew you in that direction when plotting out the story?

As strange as it may seem, the story came later because the game was made progressively, level by level. It was therefore an overall desire to create a game with a cinematic style that oriented me in that direction. My technological realization came when I played the Amiga port of the Dragon’s Lair arcade game. This game was a faithful adaptation of the original LaserDisc game. The animations were huge; the characters took up three-quarters of the screen. Their technological genius was to stream from disks! Six disks! I told myself it would be possible to save an enormous amount of memory by drawing images with 2D polygons. After a few technical tests, I realized that it was possible, though when I spoke with others they did not quite understand.

Another World is often assumed to be heavily rotoscoped like its contemporaries of the time, but while you started with that approach for the opening cinematic, you then transitioned to purely keyframe. What led you to drop rotoscopy (and yet still arrive at a very realistic result)?

Creating animations in perspective is difficult. Rotoscoping turned out to be very useful for complex animations like the car skidding in the intro. In practice, however, it was very laborious because I had to superimpose an image on the background of the Amiga’s screen. Often, the VCR would unpause after a few minutes. I would then have to rewind and reposition the tape. This technique was far less useful for creating profile views or small animations, which meant working down to the pixel. I did that only for Lester’s walking and running, and I had already acquired keyframe experience on previous projects.

Building your own editor for flat 2D polygonal animation over pixels was revolutionary in terms of allowing more frames of animation than other games of the time. Did that approach come about as a result of hitting memory limitations on prior projects, an aesthetic choice, or something else entirely?

Yes, the objective was to remove technological barriers while also achieving a specific aesthetic for the animated image, the rhythm and dynamics of the design pacing, and being able to cut immediately. Obviously, from a graphics perspective I had to change my pixel-artist habits. Having a 2D polygonal expressive image with an angular structure was a big unknown at the start of the project. I took inspiration from the way black-and-white comic-book artists create volume with few lines, often only suggesting contours. Additionally, there were only 16 colors in the game, whereas in pixel art I would often use subtle colors to create nuances. That was the challenge: to find the right balance, allowing me to convert the characters as well as the scenery. It took me one week to choose the color palette for the first level alone!

The gameplay often features custom animations created for protagonist Lester, when many games even now use one set of motions for the duration of a game. What led you to feature so many bespoke sequences over generic movement despite it naturally requiring more work?

I love diversity, especially when things renew. If there is a pattern, it must be broken in some way. So yes, surprising the player demands special animations. That’s also why there are so many kinds of death, too!

While a relatively short game overall, the universe created appears to extend beyond the scope of the story, with the player merely being a temporary participant in a much larger tale. Why do you feel this “other” world still resonates with players decades later?

Probably because of the space left to the player’s imagination. Indeed, the minimalism of the game gives you the freedom to build your own world in your mind, in the same way that a description in a book is only partial. We have this in the graphics, with their flat color shapes, and in the narrative, where the story is suggested by a flow of events without any text. The only dialogue is alien words where, again, you can imagine many things. The other aspect is the pacing: how events happen, moments of acceleration, moments of surprise while playing. It structures the entire narrative and gameplay experience. I think it is this nonverbal rhythm and tone that make Another World still resonate today. Especially the use of cinematics and how they are woven into the gameplay. Cinematics are punctuations (not short films) in Another World. Even today, not many games use short inserts like this.


Chapter 8

Our Project: Technical Animation

Video game animation is both an artistic and technical challenge. While animators on larger projects aren’t always required to handle the technical side of animation (this instead falls to a department that straddles both animation and character modeling), it helps to understand why things are the way they are in order to see opportunities for improvement. On smaller projects and teams, the animator will likely be expected to handle all aspects of the technical implementation of animation. Regardless of the level of technical responsibility placed on the game animator, an understanding of the entire pipeline transforms regular game animators into indispensable resources as they wear many hats and help solve technical problems. It is very rare nowadays to find roles that require artistic knowledge only, so becoming familiar with what happens to a game animation beyond its artistic creation will open up far more opportunities for any animator and ultimately make them a more capable member of the team.



Character Setup

Before a single frame of animation can be created, first there must be a character to animate. Character creation is a highly skilled discipline in its own right, and there must be plenty of communication between the animation and character teams during the creation phase of at least the initial characters on a project, often with technical animators mediating and deriving tasks from these conversations.

An unskinned character modeled in the “T-Pose.”

Modeling Considerations

Early considerations that animators should express, many of which can be anticipated as early as the concept phase, typically revolve around how the character will deform once skinned and animated. For example:
• If they wear hard armor, this can be particularly difficult to maintain should it be placed at key points of articulation such as the shoulders, waist, and groin area.
• Can custom rigging be used to alleviate anticipated deformation issues?
• If they have high collars, will this restrict head movement without accepting some mesh intersection?
• Is there a visual difference between elements of a character’s clothing that denotes which areas are hard and soft (stretchable)?
• Will the character’s arms intersect with the legs, body, or weapons when down by their side, and is this important for your project? (Especially relevant for characters heavily customizable by the player.)
• Similarly, if animation is to be shared across multiple characters, what visual limitations should be placed on character design? For example, might there be an upper limit to girth, or should the back remain unobstructed for sword placement when holstered?
• Does the silhouette of the character lend itself to 3D? Models are often warped by the wide-FOV gameplay camera, so chunky characters fare far better than slender ones. Are the limbs dynamic and anatomically interesting or simply pipes that look the same from all angles? Can the character type be read by the player in the often-frantic heat of gameplay?
• Are there opportunities for overlap and follow-through? Cloth and other moving parts can increase the visual quality of animation greatly, so any such opportunities should be seriously considered.
Beyond that, other considerations will likely relate to workflow, workload, and performance:
• What are the memory and performance limitations that might affect bone count, such as how many characters are expected to be onscreen at once?
• How many characters do we plan to uniquely animate?
• What are the plans for the level-of-detail (LOD) system with relation to bone count? (Bone counts can be reduced on more distant characters, along with their polygon counts.)
Ultimately, a variety of factors such as distance to camera, character customization, budget, and the priorities of the project will all play into decisions made around character deformation and visual integrity. Assuming we are always pushing for the best possible visual quality, there are many tricks available to the technical animator to achieve this. When working with the different disciplines involved, the most efficient results can be achieved by utilizing a nonlinear workflow between character artists, technical animators, and animators to minimize issues that might lead to rework of the characters over and above what is expected.

Concept → Model → Rig → Animation, with review steps between each stage (visual review for potential issues, review of the proxy mesh, testing of the animation rig) looping back whenever modeling or rigging changes are required. A recommended workflow process for character creation.

Game Anim: Video Game Animation Explained While this workflow is harder to schedule for because the amount of rework is variable, it is safe to assume there will be more at the start of the project and less at the end as the workflow stabilizes and most issues are avoided upfront. The primary goal of such an iterative workflow is to avoid issues with the rig or character further down the pipeline that force a decision between potentially large amounts of rework or shipping with visual issues.

Skinning

So now we have a beautiful model waiting to be animated from the T-pose, but first it must be attached to the skeleton. This first step, called “skinning,” is the long and thankless task of first attaching the mesh to the skeleton with a blanket pass, then working into the areas that deform undesirably.

The T-pose refers to the pose of characters as they are exported into the game engine, with no animation yet to bring them to life. We no longer model characters in a T-pose with arms horizontal (despite the name sticking), instead building them with arms at 45 degrees down: we rarely lift a character’s arms above the head, so it makes little sense to use horizontal as the midway point to model for. Arms are down for the vast majority of animations, so modeling this way allows the skinning to be better polished for the most common poses.

Painting skin weights in Maya.

Three-dimensional DCCs have greatly improved their default skinning over the years to minimize the amount of manual work required. There are shortcuts such as saving/loading presets and mesh wraps, where a pre-created skinned character overlaid on a new asset will aid the auto-generated weights (especially the more closely the two meshes match), as well as third-party plug-ins that often do a better job than the built-in tools. The vertex-weighting process may be done individually for low-poly characters, but high-resolution characters demand the use of envelopes (adjustable shapes that can more easily adjust large numbers of vertices at once) and painting tools due to the larger concentration of vertices. Nothing beats the visual eye of a dedicated technical animator making the final adjustments, so sufficient time for skinning each new character should always be included in the schedule. The technical animator will likely use a premade “range-of-motion” (ROM) animation that rotates each joint individually over the course of the timeline so that scrubbing back and forth moves the current area being skinned. If using mocap, it is worth retaining the warmup range-of-motion data the actors perform to calibrate with the system at the start of a shoot for this purpose, in addition to a hand-keyed ROM to focus on difficult areas like the fingers. Additional helper bones/joints that retain volume around pinch points such as elbows and knees may be added to the skeleton and skinned to if required, but they must be driven by the rig (automated so the animator doesn’t need to move them). As such, it benefits the technical animator to create the rig in concert with the export skeleton as part of the cyclical workflow mentioned earlier.
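A ROM pass like this is easy to generate procedurally rather than keying by hand. A minimal sketch, assuming Maya and that the skeleton’s joints are selected (the swing and timing values are arbitrary):

    import maya.cmds as cmds

    def build_rom(swing=45, frames_per_axis=40):
        # Key each rotation axis of each joint in turn along the timeline,
        # so scrubbing exercises one area of the skinning at a time.
        t = 0
        for joint in cmds.ls(selection=True, type="joint"):
            for axis in ("rotateX", "rotateY", "rotateZ"):
                base = cmds.getAttr(joint + "." + axis)
                quarter = frames_per_axis // 4
                keys = ((0, base), (quarter, base - swing), (2 * quarter, base),
                        (3 * quarter, base + swing), (4 * quarter, base))
                for offset, value in keys:
                    cmds.setKeyframe(joint, attribute=axis, time=t + offset, value=value)
                t += frames_per_axis
        cmds.playbackOptions(minTime=0, maxTime=t)

    build_rom()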

Volume-aiding helper joints.

Rigging

The final result of skinning should be a character that moves seamlessly with the underlying skeleton. However, while the position and rotation values of that skeleton are what get exported into the game as animation, animators rarely animate it directly. Instead, they manipulate a rig on top that drives the skeleton, affording them more control, custom tricks, and settings. This has the benefit of not only making the animation controls more easily selectable in the DCC viewport, but also allowing the team to adjust the underlying export skeleton without breaking the animation (which resides on the rig). The skinned-to export skeleton almost always has a different hierarchy from the rig in order to utilize the DCC’s various constraint setups for smart, animator-friendly control that would not be possible when animating a skeleton alone.

The animation rig.

To learn all about rigging requires a book unto itself, and many have been written on the subject. For the game animator, however, the most important thing to understand is how the unique aspects and attributes of rigs work to make animation creation as fast and frustration-free as possible. Some standard elements that most game animation rigs include:
• Animation controls: The ability to select the desired body part is essential, and while some rig controls are embedded inside the mesh, many float outside the character, making seeing and selecting them in the 3D viewport easier.
• Root (collision) control with autopositioning: A clearly marked control that drives the position of the character’s collision relative to the scene origin. It should be switchable between auto-following the character’s center of mass and being animated independently.
• Picker: A 2D representation of the rig in a separate window, allowing even easier access to all controls and custom attributes.
• IK/FK swapping: The ability to switch controls between IK and FK is essential for animations requiring contact with an object or surface; with most animation placing feet on the ground, this is a must-have.
• Attach points: Bones with controls to attach props such as weapons, allowing them to be animated independently of the character’s hands (or passed between them) when required.
• Attribute sliders: Animating attributes on and off, or via nonbinary values (usually anywhere between 0 and 1), allows additional control over custom constraints or even individual facial elements that would be difficult to manipulate otherwise.
• Deformation helper bones: Semiautomated controls that maintain volume at key pinch points such as elbows or wrists, with the ability to be further controlled by the animator.

Control Rig → Skeleton → Export Skeleton. A hierarchy of controls for an even cleaner export separates the exported skeleton from a skeleton driven by the rig, which in turn drives the export skeleton.

Ultimately, games can still be made with little or none of the above depending on the game’s simplicity, but complex rigs are essential for high-res characters to maintain a high level of fidelity and efficiency in animation creation. Beyond the rigging associated with the character, it should be at the animator’s discretion to make use of custom scene rigging to achieve the desired result, as long as the animation still exports correctly. Tricks such as constraining characters to moving objects (such as vehicles, or simple non-exported locators that allow additional control) are a viable method of achieving the desired result for the scene. This works because the export process typically ignores all custom constraints and hierarchy additions (at least those that don’t break the skeleton), exporting only the world-space position of the character and the bone rotations, however they are achieved. While rigs for pre-rendered footage like film and television can be broken by the animator to get the desired result, the integrity of real-time game animation rigs and skeletons must generally be maintained to prevent issues when exporting into the engine. At best, a broken rig will not display the animation correctly, and at worst it will not export to the game at all.
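As one concrete example of the attach-point and attribute-slider ideas combined, a prop can be constrained to both hands at once and passed between them by keying the constraint weights. A sketch assuming Maya and hypothetical node names:

    import maya.cmds as cmds

    # Constrain the prop to attach bones on both hands simultaneously.
    con = cmds.parentConstraint("hand_L_attach", "hand_R_attach", "sword_prop",
                                maintainOffset=True)[0]
    w_left, w_right = cmds.parentConstraint(con, query=True, weightAliasList=True)

    # Keying the weights passes the sword from left hand to right over frames 10-14.
    for frame, left, right in ((10, 1.0, 0.0), (14, 0.0, 1.0)):
        cmds.setKeyframe(con + "." + w_left, time=frame, value=left)
        cmds.setKeyframe(con + "." + w_right, time=frame, value=right)

Because the export process bakes out the resulting bone transforms, tricks like this never need to survive into the engine.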

By far the best way to attain robust and successful rigging for your characters is to ensure the technical animator is in constant contact with the animator(s) who will be using the rig, both throughout the process and afterward once it is in use, so that fixes and changes are swift.

Animation Sharing

Standardization and conformity of skeletons allow the easy sharing and reuse of animations across multiple characters inside the game, as the animations map exactly to one another. For this reason, many games share identical skeletons across multiple characters, and animation schedules should typically consider the number and variety of skeletons rather than unique characters. Even a minor difference between skeletons will require a non-negligible degree of additional work or specific technology (such as real-time retargeting of animations) to account for the differences. While not real-time (a memory cost rather than a performance one), shared rigs across differently proportioned skeletons can allow animation to be easily copied between characters, then exported individually, as long as the skeletons are similar (same number of limbs, etc.). By conforming rig controls but not skeletons, a project can have a larger variety of character sizes without exploding the workload. For near-identical skeletons such as male/female humanoids, one skeleton can be used for both sexes but skinned differently so as to appear to have different proportions. For example, a feminine character can look slimmer by skinning the limbs, shoulders, and so on toward the inside edge of the relevant bones, while the masculine character may be skinned toward the outside to appear of a different build but with an identical underlying skeleton. Ultimately, approaches such as these come down to how important character variety and fidelity are to the project.
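With conformed control names, copying animation between two such rigs is close to mechanical. A simplified sketch assuming Maya, namespaced references, and identically named controls on both characters:

    import maya.cmds as cmds

    def copy_rig_animation(src_ns, dst_ns, controls):
        # Transfer every control's curves from one namespaced rig to another.
        for ctrl in controls:
            if cmds.copyKey(src_ns + ":" + ctrl):  # returns the curve count copied
                cmds.pasteKey(dst_ns + ":" + ctrl, option="replaceCompletely")

    # e.g., reuse the masculine walk on the feminine rig before exporting each:
    copy_rig_animation("heroM", "heroF", ["root_ctrl", "ik_foot_L", "ik_foot_R"])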

File Management

Perhaps the most important technical step in preparing for production is a clean file management setup, due to the potentially massive number of files required when creating movement and other gameplay systems for a game.

Exported animation files listed in-engine.

File-Naming Conventions

First off, clear and consistent naming conventions for animation files are essential due to the sheer number of them; multiple animators, programmers, or designers must be able to easily find an animation within often huge, alphabetically sorted lists, especially on larger projects. At its core, the purpose of any naming convention is to best order groups of related animations together in a way that makes searching easy and the purpose of a given animation file clear to whoever is reading it. The best way to achieve this is to group animations by character (skeleton type) first, then animation type, then individual animation, working your way from high-level general grouping to specific descriptors at the end. The following example of simple three-descriptor-deep filenames, when ordered alphabetically, groups animations together first by skeleton, then by action type, and then by further detail within the action type.

Animation naming conventions for alphabetic grouping.

Don't be tempted to use too many abbreviations (such as fw, bk, lt, and rt for directions) to shorten the names, as the purpose of naming is clarity, not brevity. When using numbers, apply a 0 before 1, 2, 3, and so on to avoid ordering issues after reaching 10 (1, 10, and 11 may otherwise come before 2, 3, 4, etc., plus it keeps names orderly and of equal length in a list). The clearest and most popular method of separating descriptors is using underscores "_", while hyphens are similarly effective but perhaps slightly less legible. Another popular method is camelCase (capitalization of separate words), though be wary of the visual muddiness with letters featuring ascenders (t, d, f, h, k, l, b), not to mention the common confusion between lowercase L and uppercase i (l vs I). Never, ever use spaces in either animation files or exported file names, due to their ability to break folder links.

Most engines do not accept all characters (usually certain forms of punctuation), so it should be quickly established which characters are illegal when naming. In addition, engines sometimes have a seemingly arbitrary filename character limit. It is worth asking the engine programmer(s) what that may be and whether it can be extended, as long filenames that better describe the contents of the file or asset are preferable to abbreviations, so names should not be unnecessarily truncated.
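To make such rules enforceable rather than aspirational, a check can be scripted. Below is a minimal Python sketch; the convention pattern, the 64-character limit, and the example names are all assumptions to be replaced with your project's actual rules:

```python
import re

# Hypothetical convention: lowercase descriptors separated by underscores,
# zero-padded numbers, and an assumed engine filename limit of 64 chars.
MAX_LENGTH = 64
VALID_NAME = re.compile(r"^[a-z0-9]+(_[a-z0-9]+)+$")

def check_anim_name(name):
    """Return a list of problems with an animation file name (empty if OK)."""
    problems = []
    if " " in name:
        problems.append("contains spaces (can break folder links)")
    if len(name) > MAX_LENGTH:
        problems.append("longer than %d characters" % MAX_LENGTH)
    if not VALID_NAME.match(name):
        problems.append("not lowercase, underscore-separated descriptors")
    if re.search(r"_[1-9]$", name):  # run_2 would sort after run_10
        problems.append("number not zero-padded (use _02, not _2)")
    return problems

for name in ("hero_run_fwd", "hero_attack_2", "hero idle"):
    print(name, "->", check_anim_name(name) or "OK")
```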

Folder Organization

The next step after file naming is to begin organizing files into a folder structure that should make sense for anybody looking to find them. Like naming conventions, folder structures will be unique to the kind of project you're making, so a good start is to ask how many individual files you expect, how they might be grouped, and, importantly, how many unique character animations you anticipate compared to animations shared across multiple characters. If, for example, all animations are unique to each character and small in number, then it makes sense to simply group all animations under a character-named folder, likely shared with modeling.

Recommended folder structures for animation DCC files.

However, as is more often the case, animations shared across multiple characters necessitate naming top-level folders after the skeleton the animations are made for, with animations separated from models.


An alternative structure for animations shared across multiple characters.

Yet this is assuming there are multiple skeletons, still with very few animations. If every character shares one skeleton, and/or if each skeleton or character has numerous animations, then a better folder structure would group by type of move, with the expectation that each type (combat, navigation, climbing, etc.) requires many animations to bring that particular system to life.

Subgroups of folders per system.

Ultimately, the process for deciding on a folder structure should mirror that of the file-naming convention, in that it must prioritize easy searching and listing. Importantly, however, it is better to err on the side of shallow rather than deep folder structures, to reduce searching and lengthy file addresses.
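As a concrete sketch of the skeleton-first, system-second approach described above (all folder and file names hypothetical):

```
animations/
  hero_skeleton/
    combat/
      hero_attack_light_01.ma
      hero_attack_heavy_01.ma
    navigation/
      hero_run_fwd.ma
      hero_walk_fwd.ma
  creature_skeleton/
    navigation/
      creature_walk_fwd.ma
models/
  hero/
  creature/
```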


To ensure folders of animation files don't get clogged with temporary files that are not intended to stay in the game, it is worth saving such files in a folder named "_test" or something similar to reduce clutter. If, as is often the case, something was used in-game and then later removed, it is worth moving it to a folder named "_cut" or similar rather than deleting it, allowing easier access to reinstate it rather than having to sift through version control. The underscore "_" prefix keeps these folders at the top of a list, separate from the others.

Referencing

An important step in readying files for full production is to properly set up referencing: the ability to embed a reference to one file inside another, such that an animation file contains only a link to the model and rig rather than a unique copy of them. While not available in all DCCs, it is an essential feature of Maya, allowing changes to be made to all animations without batch processing individual files (which must otherwise be used).

Batch processing, or "batching," is the process of running a script to perform a repeated task across multiple files within a directory so as to vastly reduce the time and effort it would take to do so manually. Similarly, scripts can be used to automate repetitive tasks within a single scene the animator is working on, ranging from something as simple as snapping one object to another over a time range to cleaning an entire scene of excess elements that may be causing exporting issues. The more complex an automation process (usually requiring more variables), the more a full-blown tool may be required for the animator to interact with.
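To make the idea concrete, here is a minimal batch-processing sketch for Maya's standalone interpreter (mayapy); the directory path is hypothetical, and the actual export call depends entirely on your studio's pipeline:

```python
import os

import maya.standalone
maya.standalone.initialize()
import maya.cmds as cmds

ANIM_DIR = "D:/project/animations/hero_skeleton"  # hypothetical location

for root, _dirs, files in os.walk(ANIM_DIR):
    for f in files:
        if not f.endswith(".ma"):
            continue
        path = os.path.join(root, f)
        # Open each animation scene, discarding any unsaved changes.
        cmds.file(path, open=True, force=True)
        # Your studio's export call goes here, e.g. an FBX export of the
        # export skeleton over the scene's full time range.
        print("processed", path)

maya.standalone.uninitialize()
```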

This allows the technical animator to work in parallel with the animator, make any requisite changes to the rig, and see them automatically propagate to all animation files created thus far. The changed files will likely need to be re-exported, however, therefore still necessitating a batch export script. Because the animation file contains only the animation keys and attributes, its file size on the network is greatly reduced; this is a huge bonus when many animations are required, as the savings multiply with every file.

Not limited to rigs, referencing can be used to bring any file into another, enabling multiple animation scenes to be updated by a master scene with elements relevant to the task, such as referenced environment geometry.

Rig file referenced inside multiple animation files.

This is incredibly useful for long or complex scenes that may require more than one animator, for example, if one animator is animating a scene taking place aboard a ship that the characters are parented to, while the actual animation of the ship is done by another animator.

Exporting

Once some animation files referencing rigged characters have animation on them, even just a quick first pass or temporary animations, it's important for the technical animator to run at least one animation (and character) through the entire pipeline to see it in-game. This ensures not only that the rigs and animation are read correctly by the engine, but also immediately highlights any individual time sinks in the process. Any overlong aspects at this stage can cripple a project, as they multiply in the hundreds or thousands when iteration requires exporting thousands of times over the course of a multiyear project.

Unreal exporter within Maya.

Export Data Format

Exporting video game animations has changed little since the advent of 3D, essentially resulting in exported files that, when viewed in their component parts (through a text editor), contain lists of the bone rotation and/or position values for the entire skeleton on every exported frame. The only major change has been the size of these files, as memory (and the percentage allocation of the game's overall memory budget) for animation has increased with bone count. Exported animation file formats are often unique to the game engine of a particular studio, though a degree of standardization has come with the industry's widespread adoption of the .FBX file format, synonymous with popular engines like Unreal and Unity. While all exporters share a common collection of bone data, each will have its own quirks and rules for collecting that data that must be learned by the animator.

Engine Export Rules

It is important for game animators to learn from technical animators the rules governing successful exporting and to adhere to them. A good series of questions for animators to ask whenever new to a project, studio, or engine is:

• How do you set the time range?
• How are characters specified when a scene contains more than one?
• What can and cannot break the rig?
• How are cameras and props exported?
• Is bone scaling considered? (Position and rotation are generally a given, though sometimes position keys are ignored unless specified.)
• What additional data can be exported/read by the engine, such as blend shapes?
• Are there additional/missing scene elements that can cause an export to fail?

Most of these questions are generally only asked when exporting doesn't work, so it's a good idea to know the answers upfront for debugging purposes. Sometimes an export is successful but the animation does not look as it does in the DCC; that's when it's important to understand why the data may be read incorrectly by the engine. This is almost always user error and not the fault or limitation of the export process. As such, it is preferable for the technical animators and engine programmers to render exporting as error-free as possible so as to unburden the animators.

Animation Memory and Compression

At some point in the pipeline, either during exporting or when the engine builds assets into the game, a process known as compression will be run on the animation values found in the export files to reduce their memory footprint, which will in turn unfortunately reduce the fidelity of the animation in-game. It is worth understanding how to modify these settings on a per-animation basis during production, if possible, rather than waiting until the project's close, when time will be precious. Keeping an eye on memory budgets throughout the project will reduce the likelihood of having to lower the overall quality of the game's animation via a blanket compression across all assets as time runs out.

Animation Slicing

Sometimes referred to as "cutting up" on export, exporting multiple animations from one scene is commonplace and allows for easier management of the several actions required for a single move or system. It is also an essential technique for creating seamless transitions between two bespoke states or cycles, as keeping all the relevant animations in one scene means the curves can be manipulated leading into one another. Slicing should be as simple as setting export time ranges that follow one another contiguously within the same scene.

One animation sliced into multiple on export.

In the above example, while transition_a_to_b can likely interrupt cycle_a at any point in the loop, importantly, we know that cycle_b begins at the end of transition_a_to_b, so we can perfectly match the ending velocity of the transition animation into the start of cycle_b.
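Expressed as data, such a slice table might look like the following sketch (the frame ranges here are invented for illustration; in practice they would live in the scene or an export tool rather than being hardcoded):

```python
SLICES = {
    "cycle_a":           (0, 30),    # looping state A
    "transition_a_to_b": (30, 75),   # bespoke transition out of cycle_a
    "cycle_b":           (75, 105),  # begins at the transition's velocity
}

for name, (start, end) in SLICES.items():
    print("export %s: frames %d-%d" % (name, start, end))
    # ...call your exporter with this time range...
```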


In-Engine Work

Once the animation is exported, the animator's work is not done. While it is possible to be a purely artistic game animator who doesn't touch anything once the animation is exported to the engine, this role is rare these days. A game animator is expected to follow up with all the technical requirements for each animation created, be it maintaining the organization of the animation system they are working on, controlling all the blend times between animations, or going so far as to implement the animations to a playable state via state machines or high-level scripting. At the very least, the game animator should be in close contact with any other discipline that will work on the animation in the engine so as to share information on variables such as timing, interruptibility (animation canceling), required effects, and so on. Designers or programmers often implement the most technical aspects beyond the scope of the animator's knowledge, but this flows best when the animator is proactive in sharing ideas and desires to take the animation to a finished, shippable state.

Event Tags

Gameplay animators may be expected to add tags to the animation in order to trigger certain events on a given frame. These can cover any event that is desirable to keep under animator control, especially when the animation timing is expected to change often, and can include:

• Interruption: The animation cannot be interrupted by another until this frame. This ensures animations don't need to play in their entirety before another is allowed, affording longer settles at the end if uninterrupted.
• Facial poses: When multiple characters share body animations despite having different faces, their unique expressions during gameplay may be triggered by tags.
• Props: Attaching or detaching, swapping between hands, moving to holsters, and so on. Anything that may be desired over one frame.
• Hit-frame: The frame (or range of frames) when a combat attack animation is desired to detect contact.
• Visual effects: Usually added by the visual effects artists, but animators may shift the timing should they edit the animation length.
• Sound effects: Same as above, but via the audio engineer or designer; again, useful to be editable by animators should they modify the animation timing.

Tags such as these can be added in the DCC itself via a script or separate tool, or even inside the state machine. The former is often preferable, as frames can be visually selected, but changes generally require a re-export and therefore exacerbate iteration times. Whatever the method, it is important for the animator to review tags by playing the game, especially when adjusting timing that affects gameplay.
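As an illustration, tags are ultimately just per-frame data. A minimal sketch, with hypothetical tag kinds and frame numbers:

```python
from dataclasses import dataclass

@dataclass
class EventTag:
    frame: int
    kind: str          # e.g. "interrupt_open", "hit_frame", "sfx"
    payload: str = ""  # e.g. a sound cue or prop socket name

# Hypothetical tags for a sword attack clip.
SWORD_ATTACK_TAGS = [
    EventTag(frame=8,  kind="hit_frame"),                # contact detection
    EventTag(frame=12, kind="sfx", payload="whoosh_01"),
    EventTag(frame=14, kind="interrupt_open"),           # next move may cancel in
]

def tags_on_frame(tags, frame):
    """Return every tag that fires on the given frame."""
    return [t for t in tags if t.frame == frame]

print(tags_on_frame(SWORD_ATTACK_TAGS, 8))
```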


Animation event editor in-engine.

Blend Timing

Engines now commonly offer some kind of visual editing tool to handle all the various states and abilities a game character can perform and move between. The positive side of this is that ultimate control over blending and similar values often falls to the animator, with the downside being that it takes time away that the animator would otherwise spend animating.

An in-engine blend tree.

The difference between a 5-frame and a 7-frame blend between key animations can make or break the balance between animation fluidity and gameplay response, so blend times must be carefully finessed. Even without a visual editor, it is crucial that as many variables as possible related to game animation be exposed through code or scripting and never hardcoded by the programmer, as these values are guaranteed to require tuning throughout development. Whatever the method, the animator should either have access to these values or be in close contact with the designer or programmer who owns them to maximize fluidity between animations.
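One way to honor the "never hardcoded" rule is to store blend times as plain data the animator can edit. A minimal sketch, with assumed state names and frame counts:

```python
import json

# Frames of blending for each (from, to) state pair; values are examples.
blend_times = {
    ("idle", "run_fwd"): 5,   # snappier, favoring gameplay response
    ("run_fwd", "idle"): 7,   # slightly longer settle back into idle
    ("run_fwd", "jump"): 3,
}

# Serialize with string keys so designers/animators can hand-edit the file.
serializable = {"%s->%s" % pair: t for pair, t in blend_times.items()}
with open("blend_times.json", "w") as f:
    json.dump(serializable, f, indent=2)
```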

Scripting

Like state machines, visual editors for basic scripting functions are now found in many game engines, further opening the doorway for animators to manage all aspects of how their animations play. Even with classic text-based input, an understanding of how scripts function in the game engine will open up opportunities for the technically minded animator to go further in creating their own systems and test scenarios. At the very least, this knowledge will aid discussions with the designers or programmers tasked with the final implementation and/or debugging of a given system, as everyone will be using the same vocabulary. Animators who can perform even basic scripting are a valuable asset to a team as, just as with previz, they can go one step further in prototyping their own ideas. The simple ability to trigger animations when desired (such as on a button press or upon entering a region/volume in a level) can work wonders for illustrating a concept so others get on board with the idea.
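The level of scripting meant here need not be complex. Here is a hedged, engine-agnostic sketch of triggering an animation upon entering a volume; every name is a stand-in for whatever your engine actually provides:

```python
def on_update(player_pos, volume_min, volume_max, play_animation):
    """Call once per frame; plays the animation while inside the volume."""
    inside = all(lo <= p <= hi
                 for p, lo, hi in zip(player_pos, volume_min, volume_max))
    if inside:
        play_animation("door_open")

# Stand-in usage with a fake animation player:
on_update((1.0, 0.0, 2.0), (0, 0, 0), (2, 2, 4),
          lambda name: print("playing", name))
```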

Test Levels

Rather than waiting for the full game to be built before testing animations, it is essential to create a series of test levels containing elements that trigger the animations you are working on so as to see them in-game. For example, a parkour navigation system would greatly benefit from an obstacle course containing every possible size and kind of obstacle to be navigated, so as to display each and every animation making up the system in an actual gameplay context.

While test levels are essential for triggering and fine-tuning a variety of animations, they are only a stopgap and should never fully substitute for testing in the proper game levels. Often there is a discrepancy between the kinds of elements present in the "ideal scenario" of a test level and the more organic and design-oriented final game environments. Many a system that worked perfectly in the clinical test environment falls apart when used in actual gameplay levels, not least because the real context highlights just how often a system (such as feet IK on slopes, for example) will actually occur in the game, illustrating its true bang-for-the-buck value.

Test level with assault-course example.

Asset Housekeeping

While network folders containing asset files can afford to contain temporary work or old and defunct animations, the engine side of asset management must be kept clean of unwanted animations to avoid unnecessary memory usage. Similarly, naming conventions must be adhered to in order to allow anyone tasked with finding an asset to easily search through potentially lengthy lists of exported animations. Maintaining consistency between exported assets and the DCC animation scene files they were exported from only strengthens this. Ensuring the house is in order inside the engine will greatly speed up the workflow when it's time to revisit polish tasks or fix bugs in the closing stages of a project.

Digital Content Creation Animation Tools

Like engines, the tools inside the DCC involved in the creation of game animation vary studio by studio. Some come as standard, but many are custom-built by the studio or are plug-ins found online. Below is a list of the handiest tools to speed up your workflow; these will be useful on a daily basis and so should be created or sourced online. Find an updated list of animation tools at the related book website: www.gameanim.com/book.

• Save/load/copy/paste pose tool: The ability to paste poses from other scenes is not only useful to match common poses like idles throughout the project, but essential for pasting difference/delta poses on layers when working with mocap.
• Save/load/copy/paste animation tool: Same as above, but for animation (usually found in the same tool). Essential when moving animations between scenes.
• Pose/animation library: Essential for storing a database of preset poses that are otherwise costly to repeatedly make, such as hand or facial poses, and aids style consistency across animators. Having all previous animation or mocap accessible in a visual library will be a valuable budget saver as the library grows, avoiding duplication of work.
• Trajectory tool: The ability to visualize trajectories is essential for debugging unsightly pops in the trajectories and arcs of your animations. Advanced tools even enable animating via repositioning points on the trajectory itself.
• Snap-over-time tool: Often an animator will need to transfer motion from one object to another in world-space, independent of hierarchies (so copying/pasting the animation won't work). Snapping an object to another over a series of keys will prove invaluable.
• Time-warp tool: The ability to retime complex motions like motion capture in a nonuniform manner (different speeds at different parts, not simply scaling the whole animation) is incredibly useful for getting a good feel and pacing in game animations, particularly mocap.
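As an example of the snap-over-time tool above, here is a minimal Maya sketch; the control names are hypothetical, and a production tool would add bake options and error handling:

```python
import maya.cmds as cmds

def snap_over_time(source, target, start, end):
    """Key target to source's world-space transform on every frame."""
    for frame in range(start, end + 1):
        cmds.currentTime(frame, edit=True)
        pos = cmds.xform(source, query=True, worldSpace=True, translation=True)
        rot = cmds.xform(source, query=True, worldSpace=True, rotation=True)
        cmds.xform(target, worldSpace=True, translation=pos, rotation=rot)
        cmds.setKeyframe(target, attribute=["translate", "rotate"])

# snap_over_time("prop_locator", "sword_ctrl", 1, 48)
```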


(Courtesy of Sony Interactive Entertainment.)

Interview: Masanobu Tanaka
Animation Director—The Last Guardian

The Last Guardian relied heavily on procedural animation to bring Trico to life. Of all the technologies developed for the character, what was the single most important or successful in terms of making the creature appear alive and believable?

The animation control for the fixed "look-at" was among the most important technologies we built. Initially, we used rotation from the neck and then the head; however, the problem with this method was that it made Trico look robotic, almost like a toy snake. With guidance from our creative director, Fumito Ueda, we decided upon a method that first fixed the head position and rotation, followed by the neck. Real animals have the ability to fix their head position while the body is moving, which was a source of inspiration. We believe that switching to this method allowed us to add more realism to Trico. We named this method shishimai or "lion-dance control" after traditional Japanese lion dancing.


What, if any, were the unique challenges of a giant quadruped character in terms of navigation around the environment, and what animal references did you use most of all for the many subtle animalistic details on display?

Controlling a quadruped animal is no easy task, which we made even more challenging with our desire to build a controller that allowed a huge creature like Trico to squeeze into tight spaces and walk around freely. Major development obstacles included slopes, stairs, corners, and jumping into an abyss. We took great pains to make the movement natural by combining hand-key and procedural animations. In creating Trico's movements, we referenced cats and dogs, and incorporated adorable animal gestures that were familiar to us.

Often it feels like The Last Guardian is the culmination of the studio's previous projects. What was learned from animating the many colossi in Shadow of the Colossus in terms of bringing weight to Trico?

Yes, the experience with Shadow of the Colossus helped us a lot in the creation of The Last Guardian's animations. However, the colossi were not alive, as it were; thus, their movement was stiff compared to our desire to make Trico's more fluid and lovable. With Trico, we definitely incorporated our learning about expressing the colossi's weight while adding a degree of softness and elegance.

Trico generated strong responses from players in terms of invoking both empathy and frustration (the latter because he wouldn't simply obey orders), as real animals are wont to do. How important do you believe "disagreeable" AI is to rendering a convincing character?

While I was not in charge of Trico's AI, my personal opinion is that it's important to express the AI's will, not to be disagreeable. However, I think Trico's AI was not as technically elaborate as to allow for it to have its own will. Instead, we expressed its will with animations, staging, and so forth. With Trico, we emphasized simplicity of AI to allow for easier control by artists.

The Last Guardian creative director, Fumito Ueda, notably has an animation background. Is there a shared animation philosophy that has developed over the course of the studio's games, and if so, what might it boil down to?

We have so many philosophies that it is hard to express them all. Regarding The Last Guardian, we took great reference from Fumito Ueda's pet cat. Personally, I used to have a dog but never a cat, so I frequently referenced the animations of his cat. Also, we were told to emphasize the expression of weight and eliminate popping between animation transitions, as those errors would spoil all the good parts of the game that we worked so hard to bring to life.


Chapter 9

Our Project: Gameplay Animation

Gameplay animation is the quintessential area of game animation, as it is the most unique to the medium, always involving game design and technical considerations in concert with the visuals. It is also often the single largest area of the game animation workload, as modern gameplay systems necessitate ever-increasing numbers of individual animations to remain competitive. This has forced modern game animation teams to swell in ranks as the medium's progress (and player expectations) necessitates quality gameplay animations that the player will view repeatedly for the majority of a game's full runtime.

Thankfully, gameplay animation is perhaps the most fun and experimental aspect of a medium that is already forging the next "cutting edge" with every new technological advance. A great gameplay animator will not only produce visually appealing work but be involved in clever tricks to reduce or automate some of the less glamorous and repetitive aspects of data creation, leaving more time to focus on the animations that count. Ultimately, however, it doesn't matter how animations are created, as the scheduling and underlying systems are all but invisible to players; all that matters is the end result as they enjoy the game with controller in hand. An established way to evaluate how players experience gameplay animations is what has become known as the Three Cs.

The Three Cs

1. Camera: The players' window into the game world should never behave undesirably and should always afford them the best view of what they need to play. The most successful gameplay cameras match the feel of the player character in terms of response (rotation speed) while never being adversely affected by the character or environment. Variables such as speed, lag, field-of-view, and offset from the character all determine the perspective from which the player experiences the game, so much so that entire game genres are defined by their chosen cameras.
2. Character: The character's movements and performance should reflect their personality and backstory, enabling the player's role-playing in the game world. Memorable characters are a cornerstone of the most successful game series ever, from not just a visual but often an ability and story standpoint. Quite how much the character is predefined vs a blank canvas on which the player projects their own self is up to the project, as both approaches suit different ends.
3. Control: Control of the character must be effortless and never conflict with the desires of the player, with the right balance of response and complexity (or lack thereof) affording the player access to all the character's abilities. The most successful control schemes allow players to enter a flow state such that their self-awareness melts away as they directly command their in-game avatar, transporting them into the game world for hours.

Essentially, improving each of these three core pillars of gameplay animation will only enhance the first and most important game animation fundamental: how the game "feels." While every game will contain different combinations of (and sometimes entirely unique) gameplay mechanics, we're going to look at some of the most common animation systems found in most third-person action games for our hypothetical project. This should provide a framework to start thinking like a gameplay animator and, hopefully, to extrapolate toward creating entirely new and unique gameplay animation systems.

Gameplay Cameras

The control of the character by way of the animation systems, combined with how the camera behaves, is the player's conduit to the game world. Just a few decades ago, the advent of 3D games brought the necessity for a camera control method, first using mouse-look on PC and later via a second thumb-stick on consoles.


Cameras are often the domain of design, and for good reason: a badly handled gameplay camera can greatly detract from the player's experience of any game. However, it is important for the animator to at least be involved in the relationship between player character animations and the camera. After all, this is the way the player will view the movement, and both camera and character should "feel" fairly matched in terms of response. Often, design's desire for situational awareness, pulling the camera out to see more around the player, will conflict with the experience of actually embodying the avatar, so a conversation must be had to decide upon the view that best matches the goals of the project. Is the game a cinematic experience with character-heavy narrative, or a more competitive experience where gameplay advantage overrides everything? The more a camera moves with the character, the more subjective the viewpoint becomes, as though the player is embodying the character, while a more disembodied camera will feel more objective, like the player is observing.

Settings and Variables

With so many different kinds of 3D game cameras (and many games have different cameras for different situations), here are the settings for a third-person orbiting camera for our hypothetical project. This is perhaps the most complex camera setup, with at least some of the variables shared by all games. As with the gameplay animations themselves, these values should be tweaked throughout development for the best experience (a data-driven sketch of these variables follows the list):

• Distance/offset: Places the character onscreen, either at the center or to one side, defining how much of the screen real estate is occupied by the player character and how much the character obscures the player's view. Along with FOV, this variable dictates how large or small the character appears on the screen. In addition, the further away a camera sits, the more likely it will intersect with the environment as the character explores it. Typically, the camera features collision to overcome this, pushing in toward the character when backed against a wall, for example.
• Field-of-view: Not to be confused with distance; instead, how zoomed in or out the camera lens is set. Wider angles (usually maxed out at 90 degrees) allow more peripheral vision but warp characters and environments (especially at the screen edge) and render distant objects much smaller than otherwise. Narrower angles look more cinematic, with the narrowest reserved for cutscenes and other storytelling elements where gameplay awareness is less required. Wider FOVs convince players they are moving through the environment faster than narrower ones, due to passing objects and environments changing size at a much faster rate than in the "flattened" look produced by narrower angles.
• Arc: Essentially, the distance and FOV settings that vary as the vertical angle changes from being centered behind the character. Often, the arc will be adjusted such that the camera moves in closer to the character as the player looks upward, so as to reduce intersection with the floor, and vertically offset when looking downward so the player can view more of the ground beneath the character as they walk, which is important for navigating difficult terrain.

The gameplay camera arc.

• Pivot: The pivot is often the point on the character around which the camera rotates. Usually offset from the character's root/collision position with a degree of lag for smoother translation, it should never be attached directly to a joint on the character; otherwise, that region of the body will always remain static onscreen regardless of the animation. Pivoting around the character promotes a sense of the viewpoint being from that character, as they remain in relatively the same position onscreen.
• Rotation speed: Almost always customizable by the player, the default should be set so that it works well with the pacing of general character movement. Too fast, and the camera will feel flighty and hard to control; too slow, and it leaves players at a disadvantage as they struggle to see potential hazards in time.
• Rotational acceleration/deceleration: Some degree of momentum given to the camera's rotation so that it doesn't start or stop rotating at full speed the instant control input is given, giving a softer look and feel and behaving in a less jerky and unwieldy fashion. Too much rotational inertia can leave the camera moving sluggishly, but completely removing acceleration/deceleration can feel cheap and unpolished. These values should be tweaked considering the necessity for precise aiming vs more broad-stroke reorientation to allow the player smooth navigation of the game world.
• Positional dampening: The lag or delay of the camera as it follows a character so that it is not stuck rigidly to the character's movement, giving the character some play from the default position on the screen. Increasing this gives more of an ethereal follow-camera feel, as if the player is merely a viewer along for the ride, while reducing it too much will leave camera motion jerky as it jumps with every controller input.
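To illustrate how these variables might live as tunable data rather than scattered constants, here is a small sketch; every name and default value is a made-up example, not a recommendation:

```python
from dataclasses import dataclass

@dataclass
class ThirdPersonCameraSettings:
    distance: float = 4.0          # meters behind the character
    screen_offset_x: float = 0.15  # push the character slightly off-center
    fov_degrees: float = 60.0      # narrower reads as more cinematic
    rotation_speed: float = 120.0  # degrees per second
    rotation_accel: float = 480.0  # degrees per second squared
    position_damping: float = 8.0  # higher = tighter follow, less lag
    min_pitch: float = -60.0       # look-down limit in degrees
    max_pitch: float = 70.0        # look-up limit in degrees

default_cam = ThirdPersonCameraSettings()
aim_cam = ThirdPersonCameraSettings(distance=2.0, fov_degrees=45.0)
```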

Camera-Shake

A general camera-shake can work wonders for breathing life into otherwise static cameras, especially when increasing the movement relative to the character's speed, as it removes the occurrence of dead (static) pixels onscreen at any point in the game. Too much, however, can induce motion sickness in certain players, so it should ideally come with the option to be reduced in the game's settings.

Importantly, all but the noisiest camera-shakes should be animated primarily using rotation rather than position. Imagine a camera held on the shoulder of a camera operator who is breathing, rather than weirdly bending their knees up and down (which would necessitate positional animation). Motions like running should add a little camera bob up and down as the camera operator runs to catch up, but should still be mostly sold with camera rotation. Be aware of the difference between moving and rotating a follow-cam for shake purposes: rotating will move the entire view, whereas moving the camera will only affect the nearby (foreground) objects in the view. The latter is more prone to causing motion sickness and should be minimized if used at all. To avoid camera-shake adversely affecting camera aiming, roll should be utilized more than pitch or yaw, as it doesn't affect the aim point at the center of the screen.

An essential addition for action animations (either triggered through events or animated by hand) is to use camera-shake to increase the effect of impacts. As a general rule, the camera should again rotate (not move) to increase the motion being shown across the screen axis upon which the motion is occurring. For example, a character landing hard from a fall will be traveling vertically onscreen, downward from top to bottom. When the character hits the ground, the camera should initially rotate upward before settling back down so that the character continues to move down the screen, only now the entire world moves down with them to enhance the impact.

Utilize directional camera-shake to enhance the action onscreen.

Ground Movement

For most games, the first system (and therefore the first animations) to be created for a production will be ground movement, essentially allowing the player's avatar to move around the world. This is essential, as it allows all other disciplines, from design to art to audio, to begin testing and experiencing their own areas inside the game world as they move around inside it.

The All-Important Idle Animation

Cycles are employed heavily at the beginning stage of production, and the initial cycle for any project, the second the animator obtains a character to play with, should be the "idle," where the character stands awaiting player input. Nothing is more of a wasted opportunity than a lazy bouncing or breathing idle: this is a chance to imbue as much of the character's personality as possible, so much so that players can make assumptions about their character simply by seeing them standing doing nothing. Equally important is that many other animations and transitions will start and/or end in this pose, so it's important to think a lot about how it'll look early on; changing an idle late in production will likely have a knock-on effect on many other animations. A first pass of this animation exported into the game should be a single-frame pose to test out how other animations move to and from it.

Asymmetry looks much more appealing than a character facing straight on, so choose a dominant leg to go forward. If the character is right-handed (usually denoted by a weapon hand), then it makes more sense to have the left foot forward so the dominant hand is already pulled back somewhat, ready to punch, draw a gun, or swing a sword. Note that every related animation will have to return to this foot forward, which may prove difficult to match when stopping from a walk or run but is still preferable to bland symmetry. (A solution is to mirror the idle in-game depending on the stopping foot, if possible.)

Be wary of too wide a stance for your chosen idle. Legs akimbo makes it difficult to transition from idle to motion and back again without overly drawing attention to the feet sliding or repositioning. It looks even more unsightly when relying on straight blends between animations, too. A strong stance that doesn’t invite issues needn’t place feet much farther than shoulder-width apart.


There may be multiple idle animations, one for each state, such as ambient, combat, crouching, and so on. Each idle should satisfy similar conditions (e.g., same foot forward) to easily blend or transition between one another. Different characters that share most animations can still display unique idles if we blend across from a base idle to the unique stance whenever the character stops, though only if the feet match, to prevent foot-sliding during the blend.

Later in the production, the character idle can be further improved with the insertion of "cycle breakers": one-off incidental animations, played at random intervals, that break up the loop and are further opportunities for displaying the character's persona. Cycle breakers can't be too complex, however, as the player must be able to exit at any time, so they mustn't stray too far from the idle's silhouette lest they blend out awkwardly.

Testing randomly playing animations like cycle breakers in-game is time-consuming as you wait around to see the one you want to review. When editing a randomized animation, temporarily set its likelihood of playing to 100% as well as upping the frequency.
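A sketch of how such randomized cycle breakers, including the 100% debug override, might look as data (weights and clip names are hypothetical):

```python
import random

CYCLE_BREAKERS = {
    "idle_look_around": 0.5,
    "idle_shift_weight": 0.3,
    "idle_check_sword": 0.2,
}

DEBUG_FORCE = None  # set to a clip name, e.g. "idle_check_sword", when reviewing

def pick_cycle_breaker():
    if DEBUG_FORCE:
        return DEBUG_FORCE  # 100% likelihood while iterating on one clip
    names = list(CYCLE_BREAKERS)
    weights = [CYCLE_BREAKERS[n] for n in names]
    return random.choices(names, weights=weights)[0]

print(pick_cycle_breaker())
```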

Seamlessly Looping Walk/Run Cycles

From the idle naturally come the first movement animations, usually the walk and/or run. Again, this is an opportunity to give players more information about the character they are playing, so try to convey a mood with the main navigation cycles that supports the character's idle. It is highly likely that the player run cycle will be the animation viewed most in the entire game by a wide margin, so it needs to look flawless by the end of the project.

There's nothing saying that a run cycle must only be two steps: making it four, six, or eight enhances the uniqueness of any cycle but makes maintaining and polishing it more time-consuming. Be careful, however; if it is handled incorrectly and the cycle has individual elements that are too noticeable, the player will become aware of the repeating rhythm. It is necessary to create overall interesting cycle motions with no one part standing out more than the others.

It is recommended to export a walk/run cycle starting from the passing pose (one foot up, one foot down under the character's center of gravity) with the idle's back foot up. This aids any blending or transitions that will take the foot from idle to motion, as it moves the foot forward during the blend. Blending to a contact pose (both legs extended with toe and heel down, respectively) is undesirable, as the feet will noticeably slide when starting to move.


Feet contact pose (left) vs passing pose (right).

A cardinal sin of movement cycles (or any cycling motion, for that matter) is when the player can see the start or end of a loop, because either the motion hitches (pops) or all body parts arrive at a noticeable pose simultaneously. This must be avoided at all costs, with a good habit being to ensure all curves in a cycle start and end with the same tangents. A worse offender is when the start and end keys don't even match and so pop horrendously, which is unacceptable.

Even start/end tangents not matching will cause a cycle to pop.

The best way to remove the likelihood of a noticeable pose, with all limbs arriving at a key simultaneously, is to offset the keys of some but not all body parts. Grabbing the start and end keys of a motion on the desired limbs, such as the arm swing or foot movement, will alleviate this. An alternative is to adjust tangents, so the same effect is provided by the tangents overshooting the keys.


Tangents set to overshoot to provide overlap.

An invaluable aid for removing unsightly cycle pops is to visualize the curves before and after your keys. This can be enabled in Maya’s Graph Editor by first checking View > Infinity, then Curves > Pre Infinity > Cycle and Curves > Post Infinity > Cycle from the menu.

Curves set to show infinity.

Animating Forward vs In Place

There are two approaches to animating navigation cycles, depending on export requirements. One is that the cycle moves forward in the scene, such that the feet are static on the ground as the body shifts its weight forward. The other sees the character instead static at the origin, with the feet sliding underneath the character in an oscillating looping motion. Whichever approach the team chooses usually depends on whether the animation system for their chosen engine uses animation-driven or physics-driven movement, where the in-game translation is taken from the animation or controlled by code, respectively.

Quality-wise, however, the best results are usually found by moving the character forward, as it is better to establish the speed of the character and work within the limits of feet position and leg length to prevent foot-sliding than to use guesswork. The negative side of this is that it's difficult to match start and end poses, as well as to preview player character animations as the player will see them, because the camera won't move with the character to easily show the cycle loop. The best of both worlds is to have a top-level parent node, such as a master control, that is simply linearly animated backward so as to view the character cycling in place. This can easily be switched on and off by adding that simple animation to a layer that can be hidden when desired for exporting.

Top-level control moving backward to counter character moving forward.
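A minimal Maya sketch of the counter-animation layer described above; the control name, frame range, and distance are assumptions:

```python
import maya.cmds as cmds

def add_counter_layer(master_ctrl, start, end, distance):
    """Key master_ctrl moving -distance over the cycle on its own layer."""
    layer = cmds.animLayer("counterPreview")
    cmds.select(master_ctrl)
    cmds.animLayer(layer, edit=True, addSelectedObjects=True)
    cmds.setKeyframe(master_ctrl, attribute="translateZ", time=start,
                     value=0.0, animLayer=layer)
    cmds.setKeyframe(master_ctrl, attribute="translateZ", time=end,
                     value=-distance, animLayer=layer)
    # Linear tangents so the counter motion matches a constant cycle speed.
    cmds.keyTangent(master_ctrl, attribute="translateZ", time=(start, end),
                    inTangentType="linear", outTangentType="linear")
    return layer  # mute or zero-weight this layer before exporting

# add_counter_layer("master_ctrl", start=1, end=24, distance=360.0)
```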


The bane of many game animators, foot-sliding is most often the result of a mismatch between the speed an animation was created for and the (usually faster) speed at which the character actually moves in the game. Sometimes it's just unavoidable, as the game demands speeds that would make a character look ridiculous running that fast. Other times, however, it simply takes communication between the animator and the designer tasked with balancing character speeds. As ever, expect things to change, so aim to polish as late as possible so you can revisit animations where speeds have been modified for gameplay purposes.

Inclines, Turning, and Exponential Growth

Implementing alternative versions of the core movement animations to account for inclines will go a long way toward further grounding your character in the world. Having the character use more effort, leaning forward and pumping their arms as they run uphill, or leaning back so as not to trip and fall when going downhill, is as simple as checking the normal (the polygon's facing angle) of the ground beneath the character's feet and blending out the regular run for these incline-appropriate versions depending on the angle (a small sketch of this check follows below). Similarly, replacing the run with a left- or right-turning version that leads with the head and leans into the turn is an equally simple blend if you can detect the turning radius of the player character. Beware of where the lean pivots from in this case, as games often simply rotate from the base of the feet, which causes the character to appear to almost topple over like a bowling pin. For best results, swing the legs outward in a turn animation so the character pivots from the center of gravity nearer the pelvis. Even better, why not blend into the new alternative animations on a body-part basis? Doing so first with the head, then the body down, gives an even better impression of the character leading with the head and the rest following. There are many other opportunities for blending between different animations with different blend times per body part that break up the obviousness of a full-body blend and ultimately look more natural.

The flipside of increasing fidelity by creating entire alternative animations to play under certain circumstances is, as you might have already guessed, that the number of animations grows exponentially. You'll be creating alternatives not just for the run but for all the walk animations, too. And what about crouch variants? The same goes for transitions: each new addition for walk generally means the same is required for run, crouch, and even strafe. The point at which we stop is dictated only by the scope or fidelity of animation required for the project.
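As a sketch of the incline check described above, assuming a Y-up world and a unit-length ground normal (the 30-degree threshold is an arbitrary example):

```python
import math

def slope_blend_weight(ground_normal, max_slope_deg=30.0):
    """0.0 on flat ground, 1.0 at or beyond max_slope_deg of incline."""
    nx, ny, nz = ground_normal  # Y-up; assumed to be unit length
    slope_deg = math.degrees(math.acos(max(-1.0, min(1.0, ny))))
    return max(0.0, min(1.0, slope_deg / max_slope_deg))

print(slope_blend_weight((0.0, 1.0, 0.0)))   # flat ground -> 0.0
print(slope_blend_weight((0.0, 0.87, 0.5)))  # ~30-degree slope -> ~1.0
```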

Strafing

Strafe systems, common in shooting games that require the character to move in multiple directions while facing forward, differ from regular ground movement in that, rather than rotating the character to face the direction they are heading in, they continuously blend between different directional walk and run cycles depending on the angle of the player relative to the camera direction. Beyond matching foot-fall timing, the biggest hurdle to overcome is always the point at which the body must twist as the leg motion changes from a forward action to backward.

Directional strafe animations for every 45 degrees, featuring both forward and backward moving cycles.

Cycling animations can only successfully blend between similar synchronized motions. The forward run can blend with a "forward" run at 90 degrees, and the backward run can only do so with a "backward" version of the same angle. Because a forward and backward motion cannot blend (essentially canceling each other out), at some point there must be a switch. While there have been a variety of solutions over the years with varying degrees of success and complexity, games typically overcome this issue by covering more than just the eight directions at every 45 degrees. Instead, they use additional animations to provide overlapping coverage of the problem areas to afford time to blend.

The point at which the hips must switch will vary depending on the actual twist of the character when running forward. Many games featuring two-handed guns, for example, have the character already twisted to hold the weapon, so the problem area will be offset from the image above.


Overlapping strafe coverage with both forward and backward motions beyond the 90-degree angles.
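A simplified sketch of picking the two nearest directional clips and their blend weights; note this deliberately ignores the forward/backward hip-switch problem described above, which real systems must solve with the overlapping coverage shown in the figure:

```python
STRAFE_CLIPS = {
    0: "strafe_fwd",        45: "strafe_fwd_right",
    90: "strafe_right",     135: "strafe_back_right",
    180: "strafe_back",     225: "strafe_back_left",
    270: "strafe_left",     315: "strafe_fwd_left",
}

def strafe_blend(angle_deg):
    """Return ((clip, weight), (clip, weight)) for a movement angle."""
    a = angle_deg % 360.0
    lower = int(a // 45) * 45
    upper = (lower + 45) % 360
    t = (a - lower) / 45.0  # 0 -> all lower clip, 1 -> all upper clip
    return (STRAFE_CLIPS[lower], 1.0 - t), (STRAFE_CLIPS[upper], t)

print(strafe_blend(60.0))  # ('strafe_fwd_right', ~0.67), ('strafe_right', ~0.33)
```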

Starts, Stops, and Other Transitions

Directional starts, stops, and 180-degree turns will add an extra layer of polish to a ground movement system beyond the basic walk and run cycles. Simply blending from an idle to a run can look cheap without the expected lean forward and push off to show effort and acceleration. For even better results, instead of blending directly to a run, first go to a looping acceleration animation (hunkered down, arms pumping, giving the appearance of constantly accelerating, though it must loop at run speed), then blend across from that into the regular run. That will give the character an extra level of believability, as we need the character to hit top speed in a matter of frames for gameplay purposes but can't put the full acceleration in the start animation due to its brevity. Giving the visual impression that the character is still ramping up to full speed despite already being there is how we achieve both gameplay response and fluidity.

Starts also improve greatly when displaying a variety of versions to cover pushing off in all directions, with blends between them. Similar to strafing, two different 180 degrees of coverage will be able to blend seamlessly, as pushing right will start with the right leg and vice versa. To solve the overlapping angles, as with strafing crossovers, it is important to still rotate the character underneath the animation to avoid locking the start animation into an angle for its duration, something that will adversely affect the feel. Rotating the character underneath directional coverage animations is a great way to cover all angles without the player noticing.


Coverage radius of directional starts.

Starts and stops in particular go a long way to giving characters a feeling of inertia as they lean into the start and back from the stop. Making these animations overlong, however, can really encumber the feel of the character and, worse, become infuriating if the player is required to quickly avoid obstacles or may fall to their death off a cliff without precise stops. As such, starts should almost immediately get to full speed, only giving the visual impression of acceleration. Stops must similarly involve the visual reaction of stopping, albeit stopping almost on a dime. Ultimately, the distance allowed by start and stop animations will be dictated by the needs of your project and must be balanced between that and the visual results.

Regardless of the feel or method of your game's starts and stops (or really any motion through the world), one thing that is never up for question is that the results must always be consistent, so the player can make assumptions about how the character will control and handle the character appropriately.


It is bad form to have different start or stop lengths based on factors beyond the player's control, like which foot the character is currently on or similar deciding factors. Character control should always be consistent in feel, with modifiers (such as icy floors) equally consistent and clearly visible and understandable by the player.

While characters can be made to simply rotate, 180-degree-turn transitions are another visual nicety to show the shift of weight as a character plants their feet to turn backward when the player changes direction. Here again, overlong animations, or those that cannot be rotated underneath, will punish the player and cause them to instead carve out an arc rather than trigger a punitive animation. A common problem with starts and stops is when the player nudges the stick briefly to reposition the character: if the movement system requires these transitions to play in full, then a simple nudge can take the character several meters. This is especially an issue when turning 180 degrees on the spot back and forth quickly. The solution is to make starts, stops, and 180-degree turns easily interruptible by one another, by way of event tags or by cutting the animations into even smaller transitions; the smaller, the better for more response.

A simple test to see how any game handles its start, stop, and 180-degree transitions is to waggle the controller’s thumb-stick rapidly and see how the character handles it. The worst examples will leave the character uncontrollable, while better ones allow for precise repositioning.

Ten Common Walk/Run Cycle Mistakes

Convincing cycles are some of the hardest animations to create, not least because they will be seen over and over again by the player. Here are ten common issues to watch out for when creating cycles, with some helpful tips on how to avoid them:

1. IK arms: Use FK for arms that are just swinging and not touching anything static; otherwise, you'll be counteranimating them every time you adjust joints further up the chain, such as the root or spine.
2. Foot-sliding: The feet must match the rate of forward movement, or they will slide along underneath the character.
3. Feet not moving continuously: Feet move backward at a constant rate when touching the ground during a cycle. Any time they deviate from this will cause foot-sliding. (Only an issue for cycles animated in place.)
4. Overly asymmetrical: Unless the character is limping, limbs should move similarly and with a constant 1–2 rhythm. Speed up your process by animating one leg, then copying to the other and offsetting by half a cycle.
5. Animated in one axis only: Be sure to check your animation by orbiting around rather than viewing solely in a front or side view. Junior animators often forget to have the character move side to side as the character's weight shifts from foot to foot.
6. Gliding forward linearly: Every time we push off with a foot, we gain a little momentum until the next push. This should be reflected in the forward/backward oscillating motion of the root.
7. Not leaning into momentum: We walk/run forward by pushing and falling forward before catching ourselves for the next push. Leaning too far back (or not leaning at all) will look unnatural, especially at speed.
8. Leg hyperextension: Using IK can sometimes cause the legs to stretch at the extremes of a cycle. Ensure the foot rolls enough that it can still move constantly on the ground while reaching the pelvis without overstretching.
9. Feet back motion under-animated: Feet really roll back and up after kicking off, especially in a run. Under-animating this motion is a common mistake, so err on the side of exaggeration.
10. Arms swinging backward: Floppy arms rolling in a backward cycling motion are another common issue. Ensure arms are tight during runs and are offset no more than a frame or two from the leg motion, as the two drive each other almost in sync.

Jumping

Games that place importance on exploration often do not merely exist on a flat plane, so players commonly expect to be able to jump over obstacles or up to higher ledges, with worlds even being fully climbable in some instances where the game design permits. As such, jumping and climbing are common systems to be developed in part by the game animator, and there are a variety of challenges to overcome in the quest to make the interactive world more vertical.

As with ground movement in the game, the actual motion of a jump can be handled either through physics calculations or pre-created animations. While the former is more easily tweakable by design, the latter allows the animation to best match the action and is usually created as if the character jumps off a high ledge, continuing to fall endlessly until interrupted by a landing. The best solution is usually a combination of both, where the animator can animate with a jump arc in mind, but the arc is ultimately handled in-game by code.

Arcs

The arc of a jump is usually a simple parabolic curve, as with the classic bouncing-ball animation exercise; however, it can be given more character and a better "feel" if manipulated by an animator. Such an arc can have more hang time or a harder descent toward the end if desired. Any modification of a straight parabola will improve the uniqueness of the game's feel under player control. Regardless of the chosen arc, the key factor is to ensure the character's actions match the point in the jump arc: stretching out on the upward motion, pinching (or flipping/somersaulting as desired) at the apex, then descending with legs stretched out preparing to land toward the end (the expected ground plane).

A jump animation sequence typically involves a jump into a fall loop.
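One common way code can shape such an arc is to apply a lighter gravity while rising (for hang time) and a heavier one while falling (for a snappier descent). A minimal sketch of the idea; the multipliers are illustrative, not canonical values:

    # Shape a jump arc by varying gravity: lighter on the way up for
    # hang time, heavier on the way down for a harder descent.
    # (Illustrative multipliers; tune to the game's feel.)
    GRAVITY = -20.0     # base gravity, m/s^2
    RISE_SCALE = 0.8    # <1.0 = floatier ascent, more hang time
    FALL_SCALE = 1.6    # >1.0 = faster, heavier descent

    def step_jump(height, velocity, dt):
        scale = RISE_SCALE if velocity > 0.0 else FALL_SCALE
        velocity += GRAVITY * scale * dt
        height += velocity * dt
        return height, velocity

    # Simulate a jump at 30 fps from a takeoff velocity of 8 m/s.
    h, v = 0.0, 8.0
    while h >= 0.0:
        h, v = step_jump(h, v, 1.0 / 30.0)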

Take-Off

In traditional animation, there would be an expected anticipation of crouching before the jump, but any anticipation before something as commonly performed as a jump will feel sluggish under player control. Here it's possible to further use blending to our advantage by squashing the first frame of the jump take-off so low as to compress the character enough that they would look broken if players ever saw that frame on its own; but they won't, because it's being quickly blended into from the current action. This will cause the character to crouch down even as they are blending across to the start of the jump, all the while still blending out the previous animation.

Creating unseen but exaggerated "broken" poses and animations to be used as part of a blending solution is yet another tool in the game animator's toolkit to achieve visual fidelity without sacrificing feel or fluidity.


Use an unseen "broken" frame to push poses further when blending.

In addition, the jump animation itself can somewhat compensate for the lack of anticipation by featuring dragging elements like arms, legs, and any secondary objects at the very start, causing them to blend into this most extreme pose rather than be animated backward, giving the impression of dragging from wherever the jump was triggered.

Landing

Landing is almost always a separate animation that interrupts the jump when the ground is detected, and more often than not it shows different variations of impact depending on how far the character fell (often with a death after a certain height). Unlike take-offs, which show a character's effort before jumping, the landing is required to show the weight of the character, and its absence breaks the believability of the physics of the world.

A tricky issue, then, is the potential loss of control when a character lands, especially when the player desires to continue moving forward. To solve this, we can create additional landing animations for movement, such as walking and running lands, and potentially even directional changes during the landing impact, for the highest fidelity without sacrificing response.

A common issue with landing animations losing their impact is a delay of a frame or two as the character hits the ground, then detects it, then blends from the fall animation into the landing. While blends should always be minimal, you can again push the character lower than realistically possible during the blend to achieve the desired result. Importantly, be sure to work with the programmer implementing the landing animation to minimize delays in ground detection for maximum impact.

While at the programmer's desk, ray checks ahead of or below the character when falling can further anticipate the ground and be used to cause the character to extend their legs out as someone would in reality. Anticipating impacts can be used elsewhere, such as when running into a wall, so this technique quickly becomes a great way to increase animation fidelity across the board.
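That ray-check idea might look something like the sketch below, with raycast_down standing in for whatever collision query the engine actually provides (a hypothetical helper, not a real API):

    # Predict time-to-ground while falling so the character can extend
    # their legs before impact. raycast_down() is a stand-in for the
    # engine's own collision query; names here are illustrative.
    def update_fall(character, raycast_down, anticipation_time=0.25):
        if character.vertical_velocity >= 0.0:
            return  # still rising; nothing to anticipate yet
        hit_distance = raycast_down(character.position)
        if hit_distance is None:
            return  # no ground detected below
        # Rough time until impact at the current fall speed.
        time_to_ground = hit_distance / -character.vertical_velocity
        if time_to_ground <= anticipation_time:
            character.play("land_anticipate")  # stretch legs toward the ground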

With the ability to anticipate comes the ability to transition seamlessly into a variety of poses or gameplay states in advance, like jumping to a wall or a swing bar, or even simply detecting how high on a ledge the player might land and bracing for an appropriate knee, stomach, or hang landing. If the game is more realistic and features no air control, jumps can essentially be predetermined actions, so detection can take place at the point of take-off. This way, entirely different types of jumps, such as different height and length variants, can be played to perfectly match the desired jump and further promote the illusion of the character’s awareness of the world around them.

Climbing and Mantling

Climbing differs from jumping in that we can almost always know the height of the object to be climbed over/onto. As such, the most typical way of handling climbing in games is to create a suite of preset animations for all heights deemed climbable by design. Here is perhaps the strongest argument for standardizing the height and size of environmental elements in a game: not just so that animations work, but also so the player can read an area and know which elements can be climbed over and which are too high.

Height Variations and Metrics

Just as with blending cycles, similar animations here can be parametrically blended with one another to cover every height between the authored heights, as long as the animations match each other well enough. Fifty percent of a 1-m climb and 50% of a similar 3-m climb should produce a decent 2-m climb animation, though the finer the granularity in animation coverage, the better. Just remember that if the 1-m and the 3-m animations are significantly different, then we require two 2-m animations, one matching the 1 m and the other matching the 3 m, to cover all heights between 1 and 3 m.

1–2-m and 2–3-m parametrically blended climb heights.
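The weighting itself is just a normalized interpolation between the two nearest authored heights. A minimal sketch, assuming climbs authored at 1 m and 3 m (animation names are placeholders):

    # Blend weights between the two nearest authored climb animations.
    # With climbs authored at 1 m and 3 m, a 2-m obstacle yields 50/50.
    def climb_blend_weights(obstacle_height, low=1.0, high=3.0):
        t = (obstacle_height - low) / (high - low)
        t = max(0.0, min(1.0, t))  # clamp to the authored range
        return {"climb_1m": 1.0 - t, "climb_3m": t}

    print(climb_blend_weights(2.0))  # {'climb_1m': 0.5, 'climb_3m': 0.5}
    print(climb_blend_weights(1.5))  # {'climb_1m': 0.75, 'climb_3m': 0.25}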

Collision Considerations

While collision is useful for detecting the ground during jumps, it becomes a burden during climb animations, as it often cannot intersect with the environment being climbed up and over. Typically, animations touching the environment will move the character so close as to necessitate leaving the collision cylinder, with the collision needing to be animated independently of the character so as to not get stuck. Turning collision off, then on, during an action is possible, but not always recommended, should the action be interrupted and the character end up stuck inside the environment geometry when collision returns.

Cut Points and Key Poses

The workload can be lessened by cutting the animations into component parts such that the end of every climb height passes through a similar pose, such as one knee up, from which a variety of endings or transitions can play out. Landing on a knee, for example, can also work for jump landings that fall short of a ledge and must be climbed afterward. Conforming to shared poses across different systems like this reduces the overall amount of work while maintaining visual consistency.

Alignment

A key feature of climbing vs other, looser navigation animation systems is that certain frames usually display exact contact points with the environment. If a character's hands do not exactly grasp the rungs of a ladder, it will not look good at all, just as it looks wrong when characters intersect with the environment because they aren't correctly facing the wall they climb over. To solve this, the animations must not only be authored against standard environment metrics, but alignment must also be performed at runtime to move/rotate the character onto the object to be interacted with. The exact position and orientation of the ladder or other obstacle (or more likely an offset from it) must be assumed by the character's collision as they enter the climb system so that contact points will match.

For non-distinct obstacles, such as long walls that can be vaulted at any point or the top of a cliff the character is hanging from, these environmental features are generally marked up as "climbable" by design along their length. Marking up the world in such a way improves the character's ability to detect opportunities for distinct custom animations like climbing, but it can also be used for any bespoke action, such as opening a door, to provide more interactivity with the world.
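Runtime alignment is typically a short blend of the character's position and facing toward an entry transform authored on the obstacle. A sketch under that assumption (the entry-point values would come from the obstacle's markup):

    import math

    # Ease the character onto an obstacle's authored entry point over a
    # short window so contact frames line up. t runs 0..1 across the
    # alignment window (e.g., the blend-in frames).
    def align_to_entry(char_pos, char_yaw, entry_pos, entry_yaw, t):
        x = char_pos[0] + (entry_pos[0] - char_pos[0]) * t
        y = char_pos[1] + (entry_pos[1] - char_pos[1]) * t
        z = char_pos[2] + (entry_pos[2] - char_pos[2]) * t
        # Interpolate yaw along the shortest arc to avoid spinning the
        # long way around.
        delta = math.atan2(math.sin(entry_yaw - char_yaw),
                           math.cos(entry_yaw - char_yaw))
        return (x, y, z), char_yaw + delta * t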

Attack Animations

It is a basic truism that most games, in requiring a challenge to be overcome in order to progress, default to a binary kill-or-be-killed interaction with most NPCs. Until the medium matures and finds equally satisfying (yet more meaningful) interactions with the characters we create, there is nothing more visceral in game animation than the visual spectacle of combat, and nothing yet that makes players feel they're impacting the game world as much as causing enemies to take hits, then fall around them.

While all animations contain the traditional elements of anticipation, action, and follow-through, nowhere in game animation are these elements more intrinsically linked to game design than in combat attack animations. Each of the three stages strongly contributes not only to the feel and fluidity of the character's action, but also to the gameplay balance, especially in competitive multiplayer games.

Anticipation, attack, and recovery phases.

Anticipation vs Response

Anticipation, by rote design logic, is always to be minimized such that the action players desire to perform is as instantaneous as if they did it themselves. While it is important to make the connection between player and avatar as seamless as possible so as to maximize the feeling of inhabiting the character, the gap between player input and result is where much of the opportunity for gameplay "feel" exists, not to mention that actions with zero anticipation forfeit a large degree of weight.

A car that can turn on a dime leaves no challenge in navigating a racecourse, where juggling speed and turn angle to take the best line possible is the crux of a racing game's fun. The same applies to an action that cannot be executed instantly due to visually supported situations or previous decisions, such as your avatar being immobilized after being knocked down to the floor or still swinging the previous punch. Therein lies a large degree of the strategy and essential challenge of character-control gameplay.


Video game characters generally lie on a line of fast, weak, and light vs slow, strong, and heavy, as displayed by Final Fight's character archetypes. (Courtesy of Capcom.)

Despite a standard push toward response, there is an empirical understanding by both developers and players that the stronger/heavier an action, the longer we can play with the anticipation. When wielding a heavier weapon like a club or giant ax, or simply when bare-knuckling with a larger character, the player understands that attacks may land with less frequency but should expect them to hit with more damage commensurate with the visuals. The reverse is true for faster, more rapid attacks or those thrown by a character of smaller stature. This promotes a player-understood gameplay balance of fast (weak, small, or light) vs slow (strong, large, or heavy). Character designs, and therefore the animations supporting their visual representation and these adjectives, generally fall somewhere along a scale with these two groups of traits at the extremes.

Fast, weak, small, or light vs slow, strong, large, or heavy.

And yet, with anticipation, we're often still only talking about the difference between a 1- or 2-frame anticipation and a slightly longer animation (beyond outlier games that build their whole core mechanics on reading lengthy anticipation). As with jumping, relying on blends to first get to an anticipation pose and then follow through with the attack can quickly feel unresponsive. Blend times into attacks should therefore be kept to an absolute minimum of only a couple of frames, if at all, with the aim of providing just an impression of an anticipation pose and then taking the action to a contact frame as quickly as possible.

Visual Feedback

When we talk about the time it takes to respond in combat, what we really mean is the time to the contact frame, where the player sees the result of the input: scoring a hit on the enemy. It is important to understand the difference between a visual response and the time it takes to get to the desired result or contact frame. A car turning a corner at speed may take some time to respond, but the steering wheel is visibly turned instantly. Similarly, if a player character riding a horse instantly pulls on the reins to turn the horse's head but takes a wider angle to realistically turn, the player still gets immediate feedback on actions taken while visually understanding the reason for a non-instant response. As such, a character immediately pulling back a giant ax, then holding that pose to wind up for a powerful move, feels far different from one that casually does so over the same number of frames before contact.

On top of anticipation animations, different game engines and rendering pipelines (drawing what the game is doing per frame), as well as HDTVs, add a non-negligible additional delay before the character responds to player input. If a game project consistently requires almost nonexistent anticipation of actions, then it’s worth having programmers investigate what can be done to reduce input lag as well.

Telegraphing

Telegraphing, a sports term widely adopted as the NPC equivalent of anticipation, is used by design to alert the player that an action such as an attack is coming and that the player must block, evade, counter, or similar. When telegraphing an attack in combat, we exaggerate not only the length of time required to wind up the attack, but also the posing, to ensure the player can see it coming, as well as any visual-effect bells and whistles to ram the message home.

Unlike in competitive player-vs-player (PvP) combat, we generally favor the player in player-vs-environment (PvE) situations so as to promote the fantasy of overcoming great odds, when in reality the player has many advantages, not the least of which is the ability to outsmart the AI. PvP telegraphing, while present, is generally on a much smaller scale, with less generous windows of anticipation and interruptibility to match real player response times.

Follow-Through and Overlapping Limbs

Most of the selling of weight and power in a gameplay attack, without the benefit of long anticipation, is instead done in the follow-through by holding strong poses where the player can continue to read the action just performed. A follow-through that never assumes a similar pose for a number of frames after impact can feel weak and weightless, so reaching a pose that's the result of the previous action and holding there long enough to be read is powerful. This can literally be a held still frame if the animation style allows it, but it is usually a pose with some slight continued motion as the major force-driving body parts come to rest and overlapping actions catch up. This is often called the "recovery" section of a gameplay action, because the length of time the player is unable to perform another attack following the initial action is a large part of the risk in a risk/reward design.

A common mistake beginners make in animating attacks is to have the entire body perform the action (such as a sword swing) simultaneously, all arriving at the recovery pose (and every pose throughout) at the same time. Here the concept of "drag" is essential, with more applied depending on the power of the action or the weight of the weapon. Throughout a sword swing, the head should lead, with the shoulders and body next, and last the arm carrying the sword, powerfully swinging only at the end over very few frames. Because anticipation frames are at a premium, this is often only a 1-frame delay between body parts, but it is important for enhancing the power of an attack that will otherwise look weak. No person punches with the arm first and the body catching up afterward; we instead shoulder the punch, moving our body first, with the arm snapping forward at the end. For maximum impact, there should be minimal frames (if any) between the arm's drawn-back pose and its outstretched contact pose, as any poses in between will slow and weaken the punch.

While we generally start any combination of moves with a near-instantaneous action, subsequent follow-up moves can afford more anticipation, as the player already understands that the second attack cannot always interrupt the first until a given time (via an event tag). As such, a trick is to ensure a combo attack animation recovers in a pose that builds anticipation for the next hit, such as a right hook ending with the left arm already drawn back, ready for the next punch to explode when the player inputs. This works even for repeating the same action (such as throwing knives), where the throw animation ends with a follow-up knife drawn and held back, ready to throw as part of its follow-through.

Strong punch wind-up poses in a punch's recovery phase pre-anticipate the next action in a combo sequence.

Cutting Up Combos

Combinations of moves in sequence, common in even the most basic of melee attack systems, pose a challenge in that after each attack they branch to two different possibilities: either the next attack in the sequence or a return to the combat idle. There are two approaches to creating branching sequences of animations like this:

1. Create each move as its own complete animation (albeit likely within a single file), with each individual attack returning to idle but the second attack onward starting from an appropriate recovery pose of the previous animation.

2. Create an entire sequence of attacks, exported as separate animations, and only afterward animate the returns to the combat idle to be played when the player doesn't continue the combo.

While both workflows have pros and cons, the former approach is recommended for the most versatile results, because the exact frame at which follow-up attacks can be triggered will likely change with design iteration, meaning the complete-sequence approach quickly becomes too rigid to work with, as it requires constant updates. The latter approach arguably creates a smoother and more fluid overall sequence if played to completion, and it also fits best with mocap that is likely captured all in one sequence. If using motion capture, the best results will come from capturing both a complete combo sequence and individual actions in order to cover all possible parts of the sequence.

Subsequent combo anims should start with the sword further back in the recovery phase.

Regardless of approach, it is important that subsequent actions in any such sequence begin with a start pose chosen from nearer the end of the time range marked for triggering the next animation, so as to avoid reaching the end of that range and jumping backward again to the following action's start frame. Similarly, whenever blending into the next in a sequence of actions, combat or otherwise, it is important to visualize your character still progressing the animation forward during the blend, to avoid unsightly hitches should they blend into a pose that is essentially "behind" where they should be once blended.
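In systems terms, the "given time (via an event tag)" mentioned above is usually a per-attack window during which buffered input branches to the next attack instead of the return-to-idle. A minimal sketch; the animation names and frame numbers are hypothetical event tags:

    # Branch to the next combo attack only inside a tagged window;
    # otherwise keep playing, and the recovery returns to idle.
    # (Frame numbers are hypothetical event tags, set per attack.)
    COMBO_WINDOWS = {
        "attack_01": {"open": 14, "close": 22, "next": "attack_02"},
        "attack_02": {"open": 16, "close": 24, "next": "attack_03"},
    }

    def on_attack_frame(current_anim, frame, attack_buffered):
        window = COMBO_WINDOWS.get(current_anim)
        if (window and attack_buffered
                and window["open"] <= frame <= window["close"]):
            return window["next"]  # chain into the follow-up attack
        return None                # no branch this frame

Keeping the windows as data rather than hard-coding them is what lets design iterate on trigger frames without the animations themselves becoming rigid.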

Readability of Swipes Over Stabs

When considering the actions for your characters, remember the fundamental of readability. As we often cannot guarantee the camera angle from which an attack will be viewed, actions performed in only one axis, such as jab punches or sword stabs, will often be either hard to read or completely obscured by the character's body, especially when viewed from the common angle behind the character. As such, it is preferable to use swiping motions over pointed stabs, ideally at an angle other than exactly horizontal or vertical, again maximizing their readability.

This also matters for visual effects, where a trail from the weapon or body part can often greatly enhance the readability of an attack. Trails from stabbing attacks are virtually nonexistent, as the weapon or arm will overlap the effect, while swipes will leave large arcs across the screen. It is always best to consider the most common camera angle and play to that, ensuring that any gameplay motion can be read in two or more axes so it is never lost to view.

Damage Animations

As with attacks, there are different considerations between player and NPC animations. Damage animations usually scale in severity commensurate with the attack meted out, and they can have different attributes, such as knocking the character into a downed state, with further animation considerations as a result. Regardless of the type of attack/damage combination or the overall animation style, it is always better to err on the side of exaggeration over realism so as to make attacks read well for the player, especially when the camera is further from the action depending on game type, or simply when attacking enemies from a distance.

Directional and Body-Part Damage

Directional attacks that connect with a single unidirectional or, worse, wrong-directional damage action are unsatisfying at best and unsightly at worst. As such, it is always a good idea to at least include a variety of directional/body-part damage animations that can be played to match the direction the attack comes from or the body part being hit.

Beyond melee, ballistic weapon damages are often played back based on the part of the body targeted, such is the more precise nature of projectiles. For games requiring precision damage, characters tend to still use a standard cylinder/pill-shaped collision for navigating the environment while employing a more complex collision for damage detection to better match the character's shape and silhouette when animated. The number of damage animations can quickly scale up if there must be considerations for body part, direction, and severity. The realism or precision of your game will ultimately determine the granularity of different damage animations, not least because these will also likely be required by all skeleton/character types. The same will be true of death animations.
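Choosing the right directional damage animation is usually a simple comparison between the incoming hit direction and the victim's facing. A sketch, with animation names as placeholders and left/right signs depending on the engine's handedness:

    # Pick a damage animation from the attack direction relative to the
    # victim's facing (both as 2D unit vectors on the ground plane).
    def pick_damage_anim(facing, to_attacker):
        forward = facing[0] * to_attacker[0] + facing[1] * to_attacker[1]  # dot
        side = facing[0] * to_attacker[1] - facing[1] * to_attacker[0]     # 2D cross
        if abs(forward) >= abs(side):
            return "damage_front" if forward > 0.0 else "damage_back"
        return "damage_left" if side > 0.0 else "damage_right"

    # e.g., attacker directly behind a character facing +Y:
    print(pick_damage_anim((0.0, 1.0), (0.0, -1.0)))  # damage_back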

Contact Standardization

Depending on the game's fidelity, all high sword swipes, for example, should connect at roughly the same height on the enemy so that the enemy damage animation can be reused for multiple high attacks that hit at that same point. In this case, it is advisable to use visual references like locators or other scene geometry to indicate chosen damage points. To ensure consistency once a damage point has been decided upon, it needs to be brought into each damage animation requiring it. This is a perfect opportunity for referencing, so damage locations can be updated if and when the values change.

Metrics standardized for any purpose should be referenced into scenes so they can be modified from one central location. Other commonly used examples are those to do with speed and direction. An eight-pointed wireframe circle on the floor can illustrate distances in different directions to be used for directional strafe animations such as cycles and starts/stops. Climb heights, cover objects, ladders, and other standardized environmental props are also good candidates to be referenced in a library of sorts. Doing so also ensures that multiple animators working on the same system are obeying the same metrics.

An eight-way directional guide and other metric props.

Synced Damages

For melee combat, when a perfect match of action and reaction is desired, attacks that trigger a unique damage animation on the recipient, synced with the attacker, are the best option. This is a rigid approach in which, on detecting that an attack is or will be successful, the recipient is matched to the position and orientation of the attacker (or vice versa, or a balance thereof), and the damage animation plays in sync from the point of contact. This gives a lot of control to the animator, as both characters will be animated in the same scene rather than potentially separated into standardized damages, and it can afford a greater degree of precision.

While more predictable and visually impressive, the downsides of this approach are that it increases the workload immensely due to the 1:1 ratio of attacks to damages, and moreover it disallows the type of gameplay that promotes attacking multiple enemies simultaneously or via glancing blows, which might be possible with more granular methods of detecting and triggering damage animations.

This approach, however, is essential for actions that require sustained contact between two characters, such as throws or holds. When contact is required between characters, use of IK is essential. Use locators or similar dummy objects parented to one character at the grabbed body part that can constrain the position and orientation of the grabbing character's hand IK. Parenting a locator rather than directly constraining hand IK allows the locator to be animated relative to the grabbed body part, with the constraint animated on/off to allow for grabbing and relinquishing the hold.

Simple two-person IK connection setup.
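In Maya terms, that locator setup might be built something like the sketch below, using the cmds API; the node names are placeholders for your own rig, and the frame numbers are arbitrary:

    import maya.cmds as cmds

    # A locator parented to the grabbed body part drives the grabbing
    # character's hand IK via a constraint whose weight is keyed on/off.
    loc = cmds.spaceLocator(name="grab_loc")[0]
    cmds.parent(loc, "victimB_chest_jnt")  # locator follows the grabbed part

    con = cmds.parentConstraint(loc, "attackerA_hand_ik_ctrl",
                                maintainOffset=True)[0]
    alias = cmds.parentConstraint(con, query=True, weightAliasList=True)[0]
    weight_attr = con + "." + alias

    # Key the constraint on at the grab and off at the release, so the
    # hand can still be animated freely before and after the hold.
    cmds.setKeyframe(weight_attr, time=100, value=0.0)
    cmds.setKeyframe(weight_attr, time=102, value=1.0)
    cmds.setKeyframe(weight_attr, time=140, value=1.0)
    cmds.setKeyframe(weight_attr, time=142, value=0.0)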


Recovery Timing and Distances

Small damages often do not interrupt the character's current action, so as to avoid the frustration a player might experience from, for example, being rooted to the spot while attempting to evade enemy fire. Conversely, large attacks might knock the character down, leaving them vulnerable to follow-ups. While the latter is often a simple full-body animation, the former requires some degree of systems support to overlay or combine the damage animation with the current action the character is playing. The most natural candidates for this are additive actions that "flinch" on top of the current animation, but body-part actions can also be used in their absence.

For hits to the player, large damage animations are usually only interruptible by other damage animations, as they are essentially a punishment for being unsuccessful. Smaller damage animations should be either short enough or low enough on the priority scale that they can be interrupted by actions that might reduce the chance of being hit again, such as dodges, rolls, or other evasive maneuvers. The length of damage animations, at least until the frame at which the player regains control, is often key to gameplay balancing, especially in fighting games, where balancing damage frame counts is as vital as balancing attacking ones.

The interruption priority of all actions is something to be discussed with design, and blend times between different animation combinations should be balanced such that bad cases (say, midway through a somersault that leaves the character upside-down and would flip 180 degrees if blended) are given either no blend at all or custom transitions.

The travel distance on damage animations can be equally crucial to how a game plays, especially in games that might knock the player off a ledge. Typically, a short stumble that moves both feet is all that's required to prevent a character from appearing rooted to the spot. Distances must be standardized such that being hit from the left has the same distance and timing as being hit from the right, even if the actions themselves differ, to avoid any disadvantage from taking damage from a particular direction.

Impact Beyond Animation

The visceral impact of damage animations, combined with other methods such as impact sounds, visual effects such as blood, or the ever-useful camera-shake, is essential to make up for any shortfall in an attack's perceived power due to the limitations on anticipation. Working closely with other departments to ensure correct timing to match the desired frame will greatly benefit the final look of the attack and damage animations.


"Hit-stop" causes the movement of both the player and NPC to freeze for a few frames on successfully landing an attack, further enhancing the feeling of connection. This is usually done programmatically, but the illusion can be baked into the attack animation somewhat by enhancing and hanging on the attack pose longer; the downside here is that the attack then appears to connect even when it misses. To handle this, attacks may branch into different outcomes depending on whether they hit or miss.
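A minimal sketch of programmatic hit-stop, freezing both characters' animation playback for a handful of frames on a confirmed hit; the duration is illustrative:

    # Freeze both characters' animation playback briefly on a confirmed
    # hit to sell the connection. (Duration is illustrative.)
    HIT_STOP_FRAMES = 4

    class HitStop:
        def __init__(self):
            self.frames_left = 0

        def on_hit_confirmed(self):
            self.frames_left = HIT_STOP_FRAMES

        def anim_time_scale(self):
            # Returns 0.0 to pause playback during the freeze window,
            # then normal speed afterward.
            if self.frames_left > 0:
                self.frames_left -= 1
                return 0.0
            return 1.0

    # Each frame: attacker and victim both sample hit_stop.anim_time_scale().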


(Courtesy of Autumn Games.)

Interview: Mariel Cartwright Animation Lead—Skullgirls

2D animation in games since the advent of 3D was becoming rarer and rarer but has seen a resurgence with the indie scene. How did you get started, and where do you tend to hire most of your animators?

I went to CalArts to major in character animation and built my foundations in both 2D and 3D there. I knew then that I wanted to work in games but assumed that any game animation job would have me animating in 3D, so that's what I focused on (indie games as a scene was still in its infancy). However, the game animation jobs I started getting out of school just happened to be 2D, both pixel and HD, and before long I was full time on Skullgirls, which was all 2D HD animation.

At Lab Zero, we typically find our animators on Twitter and Facebook, actually! We have three full-time animators (including myself), and everything else is handled by contractors online. Our pipeline is organized in such a way that we're able to do the animation/design heavy lifting in house and use contractors we find online to finish the animation we've started.


Fighting games have gameplay response timed to the individual keyframe. What is your process for animation creation, knowing that every frame counts?

We get our animations in-game to test as early as we can. Often, this meant testing out barely legible scribbles. Once it's in the hands of our designer, though, he sees how it feels and tells the animator how he wants it timed, so the animator can go back and make those adjustments. It helps in our case that our designer has a decent sense of animation timing; he'll remove or hold frames to get the timing he wants, and then the animator can take that info and just finish the animation up.

What are the advantages of using Photoshop for your workflow instead of a dedicated 2D animation software?

I always preface any dialogue about Photoshop as an animation tool with a disclaimer: Photoshop is not the easiest tool to animate in. However, there are definitely reasons we use it. One is that for each frame of line art, we need to have a separate flat color layer underneath it that's the silhouette of the frame; this makes it so you don't just see through the line art into the background. Photoshop actions make generating these frames a largely automatic process, which is super helpful. We also have Photoshop scripts to help us export frames in the format we need. For contracting, Photoshop is the safest bet as far as a tool most any artist will be familiar with. And finally, at this point, it's just what we know. It can be tricky teaching it to animators who are familiar with better tools, but in house, using it is second nature to our animators.

How did you settle on your established anime drawing style, and why do you believe that Japanese aesthetic is so popular in the West?

For me, it's just what I grew up around and had been into. I actually tried a few times to draw more Western, if you can call it that, but it never felt natural to me and I fell back into my anime-styled stuff. I think for most people that consider themselves anime fans, it becomes a big part of our lives because of the diverse stories, themes, and styles that we don't often see in Western animation. There's still the perception in the West that most animation is for kids, with a few outliers, but there are no such rules or limits in anime. While to me, anime aesthetic is definitely appealing on a surface level, there's so much more that it can be about beyond the visuals, and I think that's part of the draw.

2D animation frees the animators from rigging limitations, though it must be harder to stay "on-model." Are there any other benefits and trade-offs that you can share?

There are a lot! I love the ease of quickly scribbling my keys, drawing an exaggerated smear or just creating new things without the worry of designing, modeling, or rigging. It's also great to be able to animate transforming or shapeshifting, or inject personality into facial expressions easily, with your primary limitation being what you're able to draw. However, there are definitely drawbacks. Designs have to be made with ease of creating drawn animation in mind; things can't be too complex, difficult, or time-consuming to draw. Keeping things on-model is definitely an issue, and fixing model issues can add up to take quite a while. It's also incredibly difficult to do things like skins or different outfits; anything of the sort requires almost complete reanimating, since every frame is uniquely drawn (though we have done it!).
While sometimes we’d love some of the luxuries that a rig could provide, to our team, the tradeoff is worth it for the fun and charm we get into our animations.


Chapter 10

Our Project: Cinematics and Facial

Story, once an afterthought in the majority of game releases, is now the central hook that drives players through many a game experience as they seek more depth and meaning from their chosen pastime. While a great story can be conveyed in games via something as simple as text on the screen, the animator's role will only aid in creating memorable moments for the player, often with dialogue delivered by the characters. This is the only time we consistently bring cameras close to show off faces, and there is an arms race in the industry to have the most "believable" (not just "realistic") characters to tell these stories. Each time the bar is raised, the effort to match it across the industry increases, and so too does the importance of animation being taken seriously at any game studio with storytelling ambitions.


Quality cinematic cutscenes are perhaps the single most expensive element that can be added to a game. With mocap, voiceover, and high-resolution characters, not to mention the writing and all the technical expertise required to pull all this together into a cinematic pipeline, it is a project in itself. Conversely, there is no other element of game production that players expect to be able to skip with a button press, such is the divisive nature of linear, non-interactive storytelling when juxtaposed with the dynamic interactivity of the rest of the medium. So when telling stories, why do we use cutscenes at all? The simple answer is twofold:

1. Unlike practically every other element of the game development process, they are quantifiable. Once a pipeline has been created and a few cutscenes made, it is entirely possible to schedule, sometimes to the day, how long it will take to complete these sections of the game.

2. As a storytelling method, the team can focus the player's attention on what the story is conveying in a fully authored manner, with none of the randomness and emergence expected in the rest of the game due to the unpredictability of players interacting with gameplay systems.

These two certainties are valuable commodities in a medium where every other production element is in such flux. Plus, with over a century of techniques already figured out in film, there's a wealth of knowledge out there to maximize the quality of these non-interactive parts of game development.

The more complex and honest answer is that we still haven't found a compelling alternative that works in a universal manner across all game stories. There have been individual successes with alternative storytelling methods, all bringing with them risks and uncertainties that cutscenes avoid, but no one-size-fits-all solution like cutscenes.

Cinematic Cameras

Before discussing the uses of cameras within cinematic language, it is important to understand how cameras work and how they can be manipulated to get the required shots. Three-dimensional cameras in DCCs, and now in freely available game engines too, do their best to imitate their real-world counterparts down to the last detail.

Field-of-View

Field-of-view (FOV) describes the size of the viewing angle rendered by the camera; imitating the lenses of real-world cameras allows it to move between narrow and wide angles for zooming purposes. Lenses are measured in millimeters of focal length: high values (∼85 mm) correspond to zoomed-in shots due to the narrow viewing angle, while low values (∼28 mm) instead represent wide shots. (Note that lens values in the DCC and/or game engine may instead be represented by angle-of-view, so translation is sometimes required, because high/low numbers run in reverse.)

While the complete range of angles is available to be exploited by the game animator for the purposes of storytelling, it is important to consider that wide angles are already the domain of gameplay, due to the desire to see more onscreen and therefore maximize situational awareness for the player. This, and a history of cutscenes in early 3D games simply reusing the gameplay camera without adjusting the FOV, means that games with a desire to appear more filmlike should tend toward narrower cameras for cutscenes unless absolutely necessary; not to mention that 3D characters and objects simply look more solid when viewed at narrow angles than at wide ones with increased perspective warping. A camera pushed further back but zoomed in can display a subject at the same size onscreen as a camera placed close but zoomed out, though the latter will render the subject far more warped toward the edges of the screen. Understanding this should avoid some of the worst cases of game cinematography that mistakenly shove cameras in characters' faces, showing them under less-than-flattering circumstances.

It is recommended for a cinematic lead to create a list of pre-approved lens angles for stylistic consistency among animators. While animators are free to move between angles when zooming in or out, resting on an arbitrary angle value is undesirable for stylistic consistency.

Narrow vs wide camera shots.
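The "translation" between a lens's focal length and an angle of view is straightforward trigonometry against the sensor (film back) size. A sketch, assuming a full-frame 36-mm sensor width:

    import math

    # Convert a lens focal length to a horizontal angle of view,
    # assuming a full-frame 36-mm sensor (film back) width.
    def focal_to_fov_deg(focal_mm, sensor_width_mm=36.0):
        return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

    print(f"85 mm -> {focal_to_fov_deg(85):.1f} deg (narrow, zoomed-in)")
    print(f"28 mm -> {focal_to_fov_deg(28):.1f} deg (wide)")

Note how the numbers run in reverse, as the text above warns: the longer 85-mm lens yields roughly a 24-degree angle, while the shorter 28-mm lens yields roughly 66 degrees.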

How field-of-view affects characters, with a "cinematic" narrow field-of-view (left) and a warped ultra-wide angle (right).

Depth-of-Field

Essentially a flaw of real-world camera lenses, the ability to hold focus on only a slice of the complete depth field has been reintroduced to virtual cinematography to emulate the desirable stylistic blurring of non-essential elements in a shot. This can be used to draw a player's focus to your subject by blurring out the foreground and background elements, not to mention that it looks more pleasing by emulating film.

Ryse: Son of Rome utilized depth-of-field to dramatic effect. (Copyright 2012 Crytek GmbH. All Rights Reserved.)

Depth-of-field (DOF) is typically driven by setting a focal-point distance from the camera, with associated values setting the distance before and after that point to determine the field of focus, often with values for fall-off and blur strength. "Rack-focus," the process of animating the focal distance values within a shot, can be used to great effect to change the player's focus between different elements without cutting. The shallower the focus, the more noticeable the effect. This is often used to move between two characters at different distances from the camera to highlight the one talking. Because focus is dependent on fixed values relative to the human eye, it is important never to simply add blur to a shot that would not exhibit it in real life, such as a wide-angle shot, as the player will detect something is off with the scale of the shot, producing something akin to macro-camera "tilt-shift" photography simulating a miniature scene.

Placing characters within the bounds of visual focal planes causes just the background image to blur.
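A rack-focus is then just an animated focal distance. A small sketch easing focus from a near subject to a far one over a range of frames; the distances and timing are illustrative:

    # Rack-focus: ease the camera's focal distance from one subject to
    # another over a range of frames. (Values are illustrative.)
    def rack_focus(frame, start_frame, end_frame, near_dist, far_dist):
        t = (frame - start_frame) / float(end_frame - start_frame)
        t = max(0.0, min(1.0, t))
        t = t * t * (3.0 - 2.0 * t)  # smoothstep ease-in/out
        return near_dist + (far_dist - near_dist) * t

    # Shift focus from a character 2 m away to one 6 m away over frames 40-70.
    for f in (40, 55, 70):
        print(f, rack_focus(f, 40, 70, 2.0, 6.0))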

The Virtual Cameraman

With the ability in CG to create pretty much any shot by placing cameras anywhere, it is all too easy to stray very far from shots a real camera operator in live-action could achieve. This can quickly lead to nonsensical camera movements that, while exciting once in a while, when used en masse will take audiences too far from the constraints established by over a century of cinema and appear amateurish. As with game camera-shake, visualizing an actual cameraperson when creating shots is a great limitation that gives a real sense of presence in scenes that crazy flying and spinning camera shots will not. This also includes mechanical shots such as dollies and cranes that are standard tools in cinema. Even going so far as to imagine your cameraperson in a chase car following an intense driving sequence will work wonders for preserving a grounded viewpoint from the perspective of a real person.

The Five Cs of Cinematography

The Five Cs of Cinematography, by Joseph Mascelli, are a great starting point for anyone unfamiliar with basic cinematic camera language, with rules established over his entire career in film, television, and teaching. Put simply, they cover:

1. Camera Angles: The camera is the perspective from which the viewer sees the point of interest or subject of the shot. Wisely choosing the angle can go a long way in supporting the mood or perception of the shot's subject by changing the camera's view of the subject or a given scene. This is generally the only time we fully author cameras, wresting control away from the player to enhance the narrative.

2. Continuity: Describes the importance of consistency between shots, as errors can be introduced in a live-action scenario due to actor breaks and the like between takes. This can be likened to bugs in a cinematic cutscene, where noticeable issues occur between different shots, such as characters teleporting to different locations, causing lighting and physics pops.

3. Cutting: The editing of different shots such that they flow naturally, with no jarring issues that might lead to viewer confusion. Games feature the additional challenge of moving in and out of gameplay cameras, sometimes bookending cutscenes with jarring transitions that can be avoided with careful consideration.

4. Close-Ups: Intimate shots highlight details rather than an entire scene, enabling subtlety in a manner that standard gameplay cameras struggle with. Focusing in on our game characters' faces enables the display of emotion and internal thought, more powerful storytelling devices than exposition alone.

5. Composition: The framing of visual elements further supports storytelling. Subject positioning, lighting, and other elements become vital storytelling devices within the frame. Only possible when cameras and all moving elements are authored, artfully composed shots also aid the presentation of gameplay information and goals to the player.

Cutscene Dos and Don'ts

The more that can be learned about cinematography, the more tools animators will have at their disposal when tasked with telling a story in as efficient a manner as possible. There are so many possible camera choices for every situation that a larger cinematic vocabulary will only aid the efficacy of imparting story or gameplay information to the player. Because unplayable cutscenes run counter to the strengths of the interactive medium, they should last no longer than the minimum time required to impart their message before getting the player back into gameplay. Here are some tips to make cutscenes work for the story.

The 180 Rule

This rule states that subjects should generally remain on a consistent left/right side of the screen regardless of cuts. Breaking this rule is referred to as "crossing the line" and can most often be avoided by authoring all cameras on a single side of a two-person conversation or similar directional scene, always leaving subjects on a consistent side of the screen.

Cameras should be placed on one side to obey the 180 rule in basic setups.

Cut on an Action

This describes cutting, for example, mid-stand as a character stands up, rather than after the standing action. This creates a better flow, as the action leads through to the next shot and gives the player's eyes something to follow. Importantly, compose the shots such that the action continues in roughly the same place onscreen in both shots straddling the cut, to avoid the viewer having to re-find the action.

Straddle Cuts with Camera Motion

Cutting between two shots can be softened further by matching a slight camera motion tracking the subject at the end of the first shot and the start of the second. For example, again with our mid-standing cut, adding a slight upward motion to the end of the former shot, then finishing this same upward tilt at the start of the latter as the camera comes to rest in its new position, will help lead the eye as the motion flows from one scene to the next.

Trigger Cutscenes on a Player Action

When entering cutscenes, especially from gameplay, triggering the cinematic when the player reaches an unmarked destination is far more jarring than cutting in when the player triggers a gameplay action, such as a vault or a door opening, with the action continuing after the cut at the cutscene's beginning. Funnel the player into such actions in collaboration with level design.

Avoid Player in Opening Shot

When we cannot guarantee the manner in which the player might enter a cutscene, do not start the first shot with the player in the frame. Instead, have them enter into it, perhaps beginning by looking at something/someone else as the character arrives in shot. A player running into a cutscene that immediately shows them walking, and vice versa, is an undesirable intro easily avoided by this technique.

Use Cuts to Teleport

While jarring cuts can disorient the player, smart use of them can be taken advantage of to cheat the player character to a new location in the world. Utilize cuts on actions where we cannot control the player's location in order to move them to the desired location post-cut.

End Cutscenes Facing the Next Goal

Finish cutscenes with cameras oriented toward the player's next goal whenever possible, giving them a helping hand as to where to go or what to do next. In the event your game does not support transitions out of cinematics, orient the player in gameplay afterward, ideally with the character ending the cutscene facing that direction.

Avoid Overlapping Game-Critical Information

The classic animation principle of staging aims to make an intention as clear as possible to the viewer. This is really where animators must use every trick at their disposal to ensure players get the information they need from every scene, shot, and action within them, with no excess baggage or confusion. A good rule to remember when developing a scene is to use separate beats, allowing each element the player sees time to breathe and be taken in by the viewer rather than overlapping them for expediency.

For example, in a car chase cutscene, if the player character's vehicle is to swerve to avoid crashing and the player needs to be shown the car up ahead they are chasing, ensure it's not the swerving that reveals the target but instead that they happen sequentially: swerve first, then reveal the goal. This reduces the likelihood of the player missing the goal behind the dynamic action happening during the swerve. Missing key information can be disastrous when it leaves the player unsure what to do next once they return to gameplay.

Acting vs Exposition

The actions and expressions of your characters can say much more than their words, and often a simple nod, when animated convincingly, will express as much as several lines of dialogue. Overwriting has been a failing of many a video game story, though admittedly, until the last few generations, writers could not rely on the nuanced acting players now enjoy. Film's mantra has long been "show, don't tell" for this purpose. Nowadays, there is no excuse for game writing to represent anything other than natural dialogue between real people when we can use physical and facial acting to convey the true intention of characters. The degree of subtlety available to the animator will depend on the fidelity of the character and the emotions they can express, as well as the closeness of the camera displaying the acting.

Allow Interaction Whenever Possible

A particularly egregious mistake in video game cutscenes is when the player's avatar performs an action that the player can't do in gameplay. When this occurs, players instead wish they were the one to pull off the exciting move or trigger the event that drives the story forward. Whenever this occurs in your project's story, ask why at least this element can't be done by the player in gameplay before returning to the cutscene as necessary. Where film's mantra is "show, don't tell," games are instead "do, don't show."


A standard over-the-shoulder shot during Mass Effect’s conversations. (Courtesy of Electronic Arts.)

Avoid Full-Shot Ease-Ins/Outs

A sin among game camera moves is the single-shot camera pan with ease-ins and -outs on either end. This is particularly jarring because the player can sense in advance when the shot will cut. A live-action shot would be clipped on either end as part of the editing process and would therefore already be in motion at the start and likely not have come fully to rest by the end. To recreate this more natural look, ensure your panning shots have slight movement from the beginning by adjusting the tangent of the shot's initial camera frame, and do the same for the ending if required. Like everything in this list, this is not a hard rule, and often the camera move reaches its final position before the end of the shot; but if so, ensure the camera does not come to a complete stop and instead let it breathe (move) a little, lest it create dead pixels onscreen as they noticeably stop moving.

Track Subjects Naturally

When tracking a subject moving in the frame, take care not to lead the action with the camera. Always imagine the camera person responding to the subject's movements as an observer might, and not the other way around. In the case of a character jumping, the character would initiate the jump, then the camera would move to follow and reframe a few frames afterward, causing the jumper to get closer to the top of the screen, perhaps even leaving it slightly, with the camera only catching up and coming to rest sometime after the character has landed again. The degree of following depends on the predictability of the motion, so a character walking from left to right can bear more camera leading, as the subject's motion is linear.

A more dramatic example would be the fast fly-by, where a fast-moving object such as a spaceship approaches and passes the camera before receding into the distance. While the camera can attempt to track the spaceship and even lose it briefly as it passes by, sometimes a camera cut will work even better for such an extreme motion.

Consider Action Pacing

Something for the game animator always to be aware of is the pacing of the cutscene in relation to the gameplay action before and after it. For example, a cutscene inserted at a high-energy moment in the middle of a boss battle will appear jarring and kill the sequence's momentum if shots linger too long on any element. As such, maintain a rapidity of camera motions and frequency of cuts that match the intensity of not just the visuals onscreen but also the gameplay bookending the cutscene.

Place Save Points After Cutscenes

More a technical concern on the design side, but something an animator should be aware of nonetheless: the player should never have to watch a cutscene twice. Some don't even watch once! As such, cutscenes should never be placed right after a save point, especially right before a notoriously hard section such as a boss fight, where player death results in restarting and rewatching the boss intro cutscene. Whenever possible, automatic save points should occur after cutscenes, provided they don't drop the player into a hazardous situation. In the event this is unavoidable, the cutscene should either be shortened as much as possible or split such that only the ending plays on repeat viewings, offering only a lead-in to the gameplay.

Planning Cutscenes

Due to their complexity and cost-prohibitive nature, cutscenes require a degree more planning than most other elements of game animation, ensuring rework and changes are minimized after any time is sunk into their creation.

Cutscene Storyboarding

Because of their linear nature, it is tempting to borrow the traditional method of storyboarding cutscenes as one would for a film or animated feature. This is worthwhile early in a game's development, before any asset has been created, as it can give an idea of the scope of the first scenes to be created with no requirements beyond pen and paper.

While still useful for fully keyframed cinematics, storyboards become obsolete more quickly than they can be maintained and updated once motion capture comes into play. Cinematics change and develop organically at the mocap stage as writing, directing, and performance-captured acting all play a part in shaping the final performance, so any rigid direction dictated by a storyboard becomes more a hindrance than a help; not to mention final camera shots are often best implemented only after the performance has been acquired.

The recommended name prefix for cutscenes begins numerically with 010_, 020_, 030_, and so on, to list them in order (as some of the few game animation files that can be ordered) while leaving room for additional scenes inserted via intermediary numbered prefixes like 025_.

Cutscene Previsualization

Instead of rigid storyboards, previz in a rough 3D layout is often faster and, importantly, easier to update and maintain; not to mention the scene will already exist in some form to be immediately implemented in-game in a rough state, allowing story sections of a game to flow and give an early sense of pacing. Like gameplay previz, cinematic previz helps answer questions ahead of cutscene creation or mocap shoots, and it allows the camera to scout the 3D location of the scene (if already created), looking for the best shots and points of interest. This all helps the actors or animators create the scene by visualizing the performance ahead of time. No more than camera positions and moving T-posed characters are required for loose composition, all with the understanding that on the shoot day, actors are still free to come up with the best performance, and the previz cameras, layout, and editing are simply adjusted to accommodate. Performance is key, so actors should be constrained as little as possible.

Cutscene Workload

The standard way to calculate the amount of animation work required for a cutscene is to take the scene length in seconds and divide it by how many seconds of animation the animator or team is expected to produce in a week. This number will vary depending on animator output, driven not just by skill but by tools and workflow, and most variably by scene complexity. Complexity is affected most by how many characters the scene features (two characters naturally take around twice the time of one), but also by the nature of the shot (character actions and interactions, facial performance, prop manipulation, or distance to the camera).
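As a worked example of that arithmetic, with entirely hypothetical output rates:

    # Estimate cutscene animation workload: scene length divided by the
    # team's expected output, scaled by scene complexity.
    # (All rates here are hypothetical.)
    def weeks_to_complete(scene_seconds, seconds_per_animator_week,
                          complexity=1.0, animators=1):
        effective_rate = (seconds_per_animator_week / complexity) * animators
        return scene_seconds / effective_rate

    # A 90-second two-character scene at 30 s/week per animator, at
    # roughly twice the complexity of a one-character scene:
    print(weeks_to_complete(90, 30, complexity=2.0))  # 6.0 weeks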

Be aware that real-time cutscenes need additional maintenance, and so require buffered scheduling toward the project's end, even after completion, to ensure they don't acquire bugs as part of shipping the game. This is not the case for pre-rendered cutscenes, which must be finished some time before the project's end for rendering/recording and post-processing, existing in the game only as video data.


Fallout 3 concept boards by Adam Adamowicz. Fallout® 3, copyright 2008 Bethesda Softworks LLC, a ZeniMax Media company. All Rights Reserved.


Scene Prioritization

While workload can be calculated granularly, cutscenes should generally be grouped into types based on tiered quality, such as gold, silver, and bronze, with the most time and attention being spent on gold scenes, less on silver, and even less on bronze. This ensures that the most effort is spent polishing cinematics that are essential to the experience and less is spent on scenes that are tertiary or can be avoided altogether by the player.

Scenes take different priority depending on a variety of factors. Applied to game animation in general, this approach allows for more sensible scheduling, so equal effort is not spent on non-critical-path elements (aspects of the game that are less important or optional to the player), in order to focus on what is front and center and important to the overall game experience.

Cutscene Creation Stages

Cutscenes, while still prone to nonlinear editing and changes as with other areas of game development, are the most linear element of game creation. As such, they can be scheduled with hard deadlines (or gates) that mark each step of the cutscene's finalization. While every studio is different depending on team structure and budget, a fairly standardized approach to locking down a final cutscene is as follows:

• Previz: The initial exploratory and planning phase shows initial character placement and movement through the scene with appropriate camera angles. Can be entered into the game for game continuity.

• Mocap/animation pass: Shot mocap replaces previz animation in the scene, then is adjusted and retimed. The camera is re-edited to match the mocap. A pass on eyelines establishes who is looking at whom/what in the scene.

• Polish: Facial mocap is taken to "final." Contact points between characters and the environment and each other are finalized now that character models should be complete.

• Post: Primarily bug-fixing/maintenance on the animators' part. Once timing is locked, audio, visual effects (VFX), and lighting can begin working.

As with all areas of game development, there is still expected overlap between these distinct phases. For example, animation can be timing-locked at any point during polish so other disciplines can work simultaneously. Importantly, those disciplines should be brought in to give input, and perhaps early rough passes of lighting and VFX, earlier in the process to aid the animator in raising the scene's final quality. A sketch of how such gates might be tracked follows.
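The gate idea lends itself to simple pipeline tooling. Below is a minimal sketch, assuming a hypothetical tracking script rather than any real studio pipeline, of enforcing that a scene only advances through these stages in order once its gate is signed off:

```python
from enum import IntEnum

class Stage(IntEnum):
    PREVIZ = 0
    MOCAP_PASS = 1
    POLISH = 2
    POST = 3

def advance(scene_stages, scene, gate_approved):
    """Move a scene to the next stage only when its gate is signed off."""
    current = scene_stages[scene]
    if gate_approved and current < Stage.POST:
        scene_stages[scene] = Stage(current + 1)
    return scene_stages[scene]

stages = {"010_intro": Stage.PREVIZ}
advance(stages, "010_intro", gate_approved=True)
print(stages["010_intro"].name)  # MOCAP_PASS
```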

The Eyes Have It

With cinematic cutscene cameras drawing us close in to the game's characters comes the opportunity to imbue an extra layer of life and character with facial acting. Just a few console generations ago, cutscenes aiming to give this level of detail to characters' performances needed to pre-render higher-quality characters offline (playing the cutscenes back as movies). Nowadays, in-game and cinematic characters can be one and the same, requiring the creation and maintenance of only one character asset, with the main remaining difference in the visual quality of cinematics being the fidelity of cinematic lighting. Contrary to popular belief, the single most important aspect to get right in facial animation is not lip-sync. While lip-sync is still important, players focus primarily on the life, or lack thereof, in a character's eyes during close-ups—making the eyes essential to focus on first.

Detroit: Become Human recorded facial performances on a massive scale to give the player choice as to how the story plays out. (Courtesy of Sony Interactive Entertainment.)

Eyelines

In a cinematic shot, especially when more than one character in a conversation is visible on the screen, it is essential for the eyes to be looking where the viewer would expect them to be in order to make the connection between characters. Failing to do so will leave characters gazing off into space, or even confuse the player as to who is looking at what, especially when cutting between multiple subjects. As with the 180-degree rule, characters looking to screen-left should cut to a responding character looking to screen-right, even if their head position doesn't necessarily match. The same goes for eyes looking up or down in shot.

Eyelines are essential to show characters looking at one another.

Due to the high polygon density around game characters' eyes, eyelines can be adversely affected by lighting in this incredibly detailed area, such that eyes that appear correct in the DCC may appear to look in a different direction once engine lighting and shadows are applied. As with all game animation, be sure to check the final result in the game before signoff.

IK vs FK Eyes

The surest way to make a character's face appear dead and lifeless is to not animate the eyes at all. The next surest way is to animate the eyes in FK, counteranimating against the head's movements in order to have them maintain a direction. Real eyes automatically rotate in relation to the head in order to stabilize their view, a behavior known as the "vestibulo-ocular reflex." If your animated eyes "swim" inside the head, even for a fraction of a second, they will lose focus and appear lifeless. Eyes constantly moving in and out of phase are incredibly distracting for the player when they should be the centerpiece of a beautifully composed shot, not to mention counteranimating against the head means much more work for the animator, especially if the head animation is still being worked on.


Glazed eyes in FK vs focused eyes in IK.

To make things easy, using an IK (or world vs local rotation) look-at solution is the absolute best way to go, not only because the eyes then need only be animated (at least as a first pass) when they change focus to look at something else, but because it allows polish and iteration of the underlying body animation without having to constantly rework the eye direction. In order to keep the eyes "alive" regardless of the method you use, you must always consider saccades.
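In Maya, for example, a look-at can be as simple as an aim constraint driving each eye joint toward a shared target locator. This is a minimal sketch assuming hypothetical joint names (eye_L_jnt, eye_R_jnt) and an eye that points down +Z; a production facial rig would wrap this in proper controls.

```python
import maya.cmds as cmds

# Hypothetical joint names; a real rig's naming will differ.
look_target = cmds.spaceLocator(name="eyes_lookAt_loc")[0]
cmds.move(0, 160, 50, look_target)  # roughly eye height, in front of the face

for eye in ("eye_L_jnt", "eye_R_jnt"):
    # Aim each eye's +Z axis at the shared target; the up vector keeps
    # the eye from rolling as the head animates underneath it.
    cmds.aimConstraint(
        look_target, eye,
        aimVector=(0, 0, 1),
        upVector=(0, 1, 0),
        worldUpType="scene",
        maintainOffset=False,
    )
# Now only the locator needs keys when the character changes focus.
```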

Saccades

Saccades, the jerky rotations of the eyes as they acquire new targets, move at an impressive 900 degrees per second in humans, which means there's absolutely no excuse for short saccades taking more than a frame or two, depending on style. Even a character simply looking forward at someone they are talking with displays saccades as their gaze moves from eye to eye, to the hairline, and to the mouth. When the eyes turn over a large angle to look in an entirely different direction, in real life they involuntarily do so in steps, as they always lock onto something. While this is the most realistic way to animate eyes, it's not always necessary for such large movements; just make sure the eyes take no more than 2–3 frames to reach their destination (often the left/right limit, as the head turns more slowly), lest they appear lazy and tired. Jumping a large angle will often be accompanied by a blink, not just to avoid having to animate stepped saccades but also to punctuate a change in thought as the character acquires a new subject in their vision. Just make sure not to do it on every single large turn, so as not to appear formulaic and unnatural.
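That 900 degrees per second figure converts directly into frame counts. A quick back-of-envelope check, with the angles purely illustrative:

```python
import math

def saccade_frames(angle_degrees, fps=30, speed_dps=900.0):
    """Frames needed for a saccade at the quoted peak eye speed."""
    return max(1, math.ceil(angle_degrees / speed_dps * fps))

# At 30 fps the eye covers 30 degrees per frame, so even a large
# 60-degree gaze shift should land within about 2 frames:
print(saccade_frames(60))          # 2
print(saccade_frames(60, fps=60))  # 4 at 60 fps
```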

Eye Vergence

Convergence, when the eyes draw in together as the subject of their focus comes closer, is an essential element to include when animating eyelines, especially when the character is looking anywhere other than far off in the distance (in which case the eyes diverge to look straight ahead—the default in a character's T-pose).

Eyes converge as the object looked at becomes closer.

A simple way to always account for this is not only to use an IK look-at to animate the eyes, but also to have both eyes focus on the same point in space rather than on a point each, which good facial animation rigs also support. Moving the look-at target to the correct distance away will then automatically provide the correct convergence and ensure your character appears to be focusing at the correct distance. Not doing so contributes to the 1000-yard stare sometimes seen on game characters even when their eyes are animated. Do not be afraid to converge the eyes greatly when characters are focusing on something really close, as that's exactly what happens in real life, though when animating to a cinematic camera, it may be necessary to make minor adjustments on top to increase or decrease the convergence if need be.
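The geometry behind this is simple trigonometry: each eye rotates inward by an angle set by half the distance between the eyes over the focal distance. A small sketch, assuming an illustrative 6.5 cm interpupillary distance:

```python
import math

def vergence_inward_degrees(distance_m, ipd_m=0.065):
    """Inward rotation of each eye toward a target straight ahead."""
    return math.degrees(math.atan((ipd_m / 2.0) / distance_m))

# Far targets need almost no convergence; near ones need a lot:
print(round(vergence_inward_degrees(10.0), 2))  # ~0.19 degrees at 10 m
print(round(vergence_inward_degrees(0.25), 2))  # ~7.41 degrees at 25 cm
```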

"Pseudostrabismus" is the false appearance of crossed eyes, seen mostly in infants because the spacing of their facial features is temporarily out of proportion—and often seen in 3D characters when the eyeballs are incorrectly placed in the head. This is something that absolutely must be ironed out with the character team when rigging the face, in order to avoid having to compensate with animation throughout the project.

Thought Directions

Eyes are often referred to as "the windows to the soul," so it's important that they reflect the inner thoughts of the character you're animating, especially during dialogue. Even with slight saccades, a character talking while constantly looking forward is not only uninteresting but unnatural. Animate the eyes occasionally looking away from the character their owner is conversing with to highlight changes in thought, or during pauses between speaking for punctuation. Looking up and to the side might support a character remembering something or searching for an answer, shifting the gaze left and right can illustrate nervousness, and eyes cast downward help convey sadness or shyness. The eyes should be the most interesting thing onscreen during close-ups, and every change of gaze only aids this. After the eyes, though, the next most important element of dialogue facial animation is the mouth.

Lip-Sync

Lip-sync is easily one of the hardest elements of facial animation to get right, not least because the mouth is one of the hardest character elements to rig correctly. Just understand that when it does work, it should go unnoticed, with the player's attention ideally drawn instead to the eyes.

Phonemes

In its most basic application, dialogue animation is determined by a series of preset mouth shapes named "phonemes" that, when grouped together, cover every sound required to produce dialogue. For example, the same shape that produces the "A" sound also produces "I." The complete series of pose groups is shown below:

The basic phoneme mouth shapes.


Shape Transitions

It's not as simple as lining up these shapes to match their timing in the dialogue (although that is a great first step to ensure correct timing). The mark of good lip-sync animation is how it transitions between the shapes, as well as which shapes may not be hit 100% or may be skipped over entirely, so as to avoid rapid lip-flapping during fast speech or complex words. Ultimately, the best aid to achieving natural lip-sync is a low-tech solution: a shaving mirror on your desk will allow you to rehearse and reference over and over again as you work through the sequence of lip shapes. Just be sure to talk as naturally or as exaggeratedly as your animation style requires. Unlike most animation, it's best to err on the side of caution when moving the mouth, so as not to animate overt lip-flapping.

Facial Action Coding System

When keyframing lip-sync, the standard working method is for the animator to have easy access to preset poses (in a library or via DCC slider controls) and to begin by pasting them in time to match the dialogue audio. Importantly, however, unless the project's simplicity or budget dictates otherwise, the preset poses should not be limited to those listed above. Instead, those poses should be built from more granular shapes determined by smaller muscle movements that, when combined, can reproduce the phoneme shapes. In addition, controls are required for every other fine detail needed to bring a face to life, such as subtle skin movement around the eyes, nose, forehead, cheeks, and neck, as well as the mouth and jaw. The highest-resolution characters in games often feature hundreds of individual shapes, driven by sliders rather than posed manually, derived from the facial action coding system (FACS). FACS comprises the microexpressions forming the large variety of muscle movements possible in the face, originally categorized by Paul Ekman and Wallace Friesen in the late 1970s. The granularity required, and therefore the number of shapes, will differ for each project depending on the fidelity and importance of its facial animation.
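As a loose illustration of building phoneme poses from granular shapes, the sketch below blends hypothetical FACS-style slider weights into a compound mouth pose; the shape names and weights are invented, not taken from any published rig.

```python
def blend_poses(*poses):
    """Combine granular shape weights, clamping each slider to 1.0."""
    combined = {}
    for pose in poses:
        for shape, weight in pose.items():
            combined[shape] = min(1.0, combined.get(shape, 0.0) + weight)
    return combined

# Hypothetical slider weights (0.0-1.0), as a DCC rig might expose them.
jaw_drop = {"jawOpen": 0.6}
lip_funnel = {"lipFunneler": 0.8, "lipPucker": 0.3}

# An "O"-like phoneme built from smaller muscle shapes:
print(blend_poses(jaw_drop, lip_funnel))
# {'jawOpen': 0.6, 'lipFunneler': 0.8, 'lipPucker': 0.3}
```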



DCC sliders driving the many FACS poses.


Sharing Facial Animation

Unlike standardized body skeletons, it's likely that different characters in your game will have unique faces, which typically means the one-size-fits-all approach (while perhaps still sharing bone counts and names) will not be available for faces. As such, to create animations that work on multiple characters with different faces, the facial animation must be exported as a series of values corresponding to each character variation's unique poses. Exporting simple values between 0 and 1 not only saves memory (where each pose would otherwise require the position and rotation of every bone in the pose), but also allows the same values to play on characters set up with entirely different facial rigs, bone counts, and pose shapes. For example, an eyebrow-raise pose on a dragon could also play on a mouse despite the disparities in visuals. As such, an animation comprising poses that match in name, if not in visuals, will play across multiple character types.
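A minimal sketch of the idea, with invented pose names and data layout: the exported animation stores only named weight curves, and each character maps those names onto its own rig however it likes.

```python
# Exported facial animation: per-frame weights keyed by pose name.
# Pose names and values here are invented for illustration.
anim = {
    "browRaise": [0.0, 0.4, 1.0, 0.7],
    "jawOpen":   [0.2, 0.2, 0.5, 0.1],
}

def apply_facial_anim(anim, rig_poses, frame):
    """Drive any rig that exposes poses by name with shared weight curves."""
    for pose_name, curve in anim.items():
        if pose_name in rig_poses:  # poses the rig lacks are simply skipped
            rig_poses[pose_name] = curve[frame]
    return rig_poses

dragon_rig = {"browRaise": 0.0, "jawOpen": 0.0, "nostrilFlare": 0.0}
mouse_rig = {"browRaise": 0.0, "jawOpen": 0.0}

# The same animation plays on both, despite differing rigs:
print(apply_facial_anim(anim, dragon_rig, frame=2))
print(apply_facial_anim(anim, mouse_rig, frame=2))
```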

Sharing facial animation across characters in Fortnite. (Copyright 2018 Epic Games, Inc.)

Creating Quantities of Facial Animation

With facial animation being the most difficult and time-consuming kind of animation to create from scratch, games that are dialogue-heavy, such as RPGs or any lengthy story-based adventure, often require a solution to help animators save time. Thankfully, several software solutions now generate lip-sync procedurally from a combination of audio files and text input, creating a low-quality result when quantity is desired over quality. The resulting animation can also be a great base for animators to then work on top of for the most important scenes. Video-based facial capture is also becoming affordable enough for even the tightest game development budget, with software combined with simple hardware like webcams or cell phones producing decent enough results for a first pass (or for faces far from the camera), while again providing a better-than-nothing starting place for animators.

Troubleshooting Lip-Sync

We know when it's wrong, but it's often hard to find the exact reason why something is "off" with lip-sync. Try these simple troubleshooting steps to quickly eliminate the most common causes of lip-sync issues.

• Timing: The most common issue with lip-sync not looking right is the timing being off. Try grabbing the frames around the area in question and shifting them forward or backward (see the sketch after this list). Be sure to playblast/render a video after each adjustment to ensure low frame rates in the DCC are not affecting your judgment. When editing performance-captured facial mocap, sometimes the whole animation is simply out of sync and must be adjusted.

• Mouth shapes: Sometimes one or more shapes just aren't hitting enough to be seen, or are wrong entirely. If a shape is not noticeable enough, try holding it for a frame or two longer if timing allows, then adjust from there.

• Lip-flapping: Often the mouth movement is too noticeable because you are posing out every shape to match the audio. As always, check yourself reading the whole line in the mirror to see if each shape needs to be seen, or can be skipped over (or passed through more quickly) when the talking just gets too fast.

• Transitions: The mouth doesn't simply move from pose to pose, but transitions through intermediary shapes to hit poses, especially when enunciating clearly or speaking slowly. In fact, many phonetic "shapes" require multiple poses, such as the "B" sound, with the mouth starting closed then opening to release the sound. Again, check your mirror or video reference to find when the lips purse, open, or close between phonemes for best results.
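For the timing case, most DCCs let you shift a block of keys relative to where they sit. In Maya, for instance, something along these lines (the control node name is hypothetical):

```python
import maya.cmds as cmds

# Nudge all facial keys between frames 120 and 140 two frames later:
# a quick test for whether the lip-sync is simply out of sync.
cmds.keyframe(
    "face_ctrl",          # hypothetical facial control node
    edit=True,
    time=(120, 140),
    relative=True,
    timeChange=2,
)
```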


(Courtesy of Sony Interactive Entertainment.)

Interview: Marie Celaya
Facial Animation Supervisor—Detroit: Become Human

Can you please explain the process of producing facial animation on Detroit: Become Human and your role on the project?

Facial animation at Quantic Dream is a highly particular process, as our ambition is to truly recreate our actors' appearance and performance, exactly as it occurred on set. We are storytellers first and foremost, and our aim is to immerse the player and draw them fully "into" the experience. We animators have a saying here—"The best animation is the one you don't notice." Animations must be credible, believable, and so elegant as to fully render the real thing.

Detroit is a production-heavy title with its branching storyline necessitating vast numbers of facial performances. Did you arrive at any formulas that you would apply to all performances as a starting pass in order to quickly raise the quality of the performance capture before improving further?

We make use of a specialized tool that identifies a gallery of key poses and expressions. Having identified such an expression (by registering an offset of markers between a neutral and smiling lip/eye differential), the refinements I made to the actor's initial FACS recordings are automatically applied. This smooths out any anomalies in data capture and provides a first phase of refinement before I finalize the details by hand.

This saves us a considerable amount of time, though there is an art to having the tool identify the right expressions and apply the right weight of correction.

What is your attitude toward technology vs art in your approach to facial performance? How much should studios be focusing on pipelines that capture high-quality facial performance vs ones that instead free up time for animators to polish more once they receive the motion capture?

Art and technology are the mother and father of our work. Given the volume of data we handle, small improvements in efficiency scale up to enormous effect. That's why we've perfected the data capture process as much as possible, minimizing the time needed to refine by making improvements at the head of the work stream. Some refinement is and always will be necessary, but that's the price you pay if you want true verisimilitude.

The studio understands and reflects this focus on technology as a means of freeing up animators' time. But the company also understands that no tool will ever be totally perfect; even the cleanest possible data will require an artistic touch that must be human because it is truly a subjective enhancement. We often add REM (rapid eye movements entailing tiny darts of look or attention) or other microexpressions that are essential to make a performance feel real, even if most people wouldn't consciously notice them. Whether to add such expressions, and how many, is a question of camera angle and proximity, et cetera, but we are always looking for ways to make such humanizing enhancements.

Some of the best performances have no words spoken. What do you find most helps sell a character's internal thought process when a performance features no dialogue?

The eyes are the key. Truly minuscule details can make the difference between a disconcerting performance and an entirely absorbing one. I mentioned that we add darting eyes: other examples include half-blinks, flickers of attention, the direction of a look, the length and frequency of blinking, even whether and when a character's gaze moves to an interlocutor or not. All contribute to an expression of realness and emotional depth. Most players are not conscious of these minuscule details, but the presence and quality of such details is what separates a moving performance from a distracting or uncanny one.

What in your experience is the single most important element to focus on when polishing a captured facial performance or animating one from scratch, and where might others misplace this effort?

It is essential that characters look at each other during dialogues. Not all the time and not continuously (at least, not in all cases), but when and where the performance dictates it. Again, this is partly a subjective finesse, which is where the artistic consideration comes fully into play. Eyes carry the weight of a performance. They are the window to the soul, and actors use them to indicate the interior life of a character. Darting looks are critical not only for realism but to convey emotional depth. As with any animation, they must render the performance faithfully and be created efficiently. For realism, "less is more." Adding too many emotive details and quirks will result in something cartoony. By the same token, an almost-faithful rendition of a performance will drop you into the "uncanny valley," which is why an excellent rig is required for treating performance data. This is where I must provide controllers and constraints to the animators and personally review every detail.

Consistency in the animation approach is a huge plus. Wherever possible, I used the same animator for a character's repertoire of scenes. Production constraints meant this wasn't always the most effective way of working, but wherever a particular animator had a certain passion for Kara, Connor, or Markus, I would assign them the character's signature shots. This meant their work was a labor of love, but also that they intuitively understood the refinements needed for the best results.

Everybody must be given a chance to shine. I take the view that every animator has the potential to contribute something truly artistic and meaningful to the project. That means whether they are a senior, a junior, long-serving or a newcomer, everybody must have a high-value sequence (e.g., a close-up dialogue) as an opportunity to show what they can bring to the table. This is the best thing for the individual, for team cohesion, and for the project as a whole.


Chapter 11

Our Project: Motion Capture

Arguably, the single largest innovation in game animation in the last few decades has been the widespread adoption of motion capture (mocap for short)—the process of capturing the motion of live actors. Much was said about mocap in its early days along the lines of "It's cheating," "It's not real animation," and "It'll replace animators and we'll lose our jobs," but one need only look a decade earlier to see the same fears vocalized by animators regarding the shift from 2D traditional animation to 3D computer animation. The idea that a computer character could have the same life as a series of artfully hand-drawn images was incomprehensible to many at the time, and the same is true now of mocap. The simple fact is that as video games have matured and their subject matter has moved from cartoonlike characters and actions to more realistic renderings of human characters and worlds, the old approach of keyframing humans simply wasn't cutting it visually, not to mention the sheer volume of animation required for a fluidly moving character, with all its cycles, blends, and transitions, would be impossible to create any other way.



Gameplay mocap actors must often be very physical. (Courtesy of Jay Britton & Audiomotion.)

That's not to say that mocap can't be a crutch, and when wielded incorrectly, the results are far from satisfying. Some in production still incorrectly believe that we shoot mocap, implement it in the game, and the job is done. The search for a "silver-bullet," one-stop solution to capturing the subtleties of acting is ongoing, but this technology is merely a tool for a talented animator to wield in the quest to bring characters to life. Mocap is simply another method to get you where you want to go more quickly, and the real magic comes when a talented animator reworks and improves the movement afterward.

Do You Even Need Mocap?

For our hypothetical project, long before getting into the nitty-gritty of motion-capture production, a very important question should be asked: whether to use motion capture or stick with a traditional keyframing approach. Here are some considerations to help answer that question.

1. What is the visual style of the game? A more realistic style benefits greatly from mocap, whereas mocap on highly stylized characters can look incorrect. Cartoony and exaggerated motion will help sell characters that are otherwise lacking in detail and visual fidelity, including those seen from afar, whereas mocap makes it easier to achieve realistic character motion.

2. Are our characters even humanoid? While some games have been known to mocap animals, the approach is typically used only for humans. If our main characters are nonhuman creatures or nonanthropomorphic objects, then mocap often isn't even an option.


3. What kinds of motions will feature most in the game? If the characters are performing semi-realistic motions such as running, jumping, climbing, and so on, then mocap will suit, whereas if every move is expected to be outlandish or something no human could perform, then keyframing might suit better. The balance of these actions should determine the project's adoption of mocap.

4. What is the scope of the game? Mocap gives the best value for the money when used on large projects with lots of motion, at which point the production cost of setting up a mocap shoot and the required pipeline is offset by the speed at which large quantities of character motion can be created. That said, cheaper yet lower-quality solutions are becoming more readily accessible for smaller projects.

5. Would the budget even cover it? While affording an unparalleled level of realism, motion-capture shoots can be expensive. That said, when weighed against the cost of hiring additional animators to achieve the same quantity of motion via keyframe, the costs can (depending on volume) become comparable.

6. What is the experience of the team? An animation team built over the years to create stylized cartoony games may take issue with having to re-learn their craft, and attempts to adopt mocap may meet resistance. That said, motion capture does become a great way to maintain a consistent style and standard across animators.

How Mocap Works

While not absolutely necessary for great results, an understanding of the mocap process will only aid the game animator in finding ways to speed up the pipeline and get motion capture into the game faster and at a higher quality.

Different Mocap Methods

Optical Marker-Based

While there are several alternative motion-capture methods, the traditional and most commonly used is the triangulation of optical markers on a performer's suit, captured by arrays of cameras arranged around a stage to create a "volume" within which the performance can be recorded. This provides the highest quality of motion capture but is also the most expensive. These camera arrays can number anywhere from 4 to upward of 36, and the highly reflective markers are tracked at higher frame rates than required by the game project (typically 120 frames per second). As long as no fewer than three cameras can simultaneously follow a marker, the software model will not lose markers or confuse them for one another. When this does happen, the clean-up team (usually provided by the stage) will manually sort them again.
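The "three cameras" rule is why occlusion produces gaps in a marker's trajectory, which clean-up then fills. A toy illustration of the simplest possible fix, linear interpolation across a short gap (real clean-up tools are far more sophisticated):

```python
def fill_gaps(track):
    """Linearly interpolate None entries in a 1D marker coordinate track.

    Assumes each gap is bounded by tracked frames on both sides.
    """
    filled = list(track)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            start = i - 1  # last tracked frame before the gap
            end = i
            while end < len(filled) and filled[end] is None:
                end += 1   # first tracked frame after the gap
            for j in range(i, end):
                t = (j - start) / (end - start)
                filled[j] = filled[start] + t * (filled[end] - filled[start])
            i = end
        i += 1
    return filled

# A marker occluded for two frames:
print(fill_gaps([1.0, 2.0, None, None, 5.0]))  # [1.0, 2.0, 3.0, 4.0, 5.0]
```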



Optical camera-based mocap is the standard for AAA development.

Accelerometer Suits

The performer dons a suit with accelerometers attached which, when combined with a simulated model of human physics and behavior, provide motion data without the need for a camera volume. However, unless the animation team is prepared to spend longer working with the data, the results fall short of the professional quality provided by marker capture. Accelerometer mocap is therefore useful for lower-budget projects, or for previz ahead of real captures on larger ones.

Depth Cameras

A third and experimental approach is to use depth-sensing cameras with no markers, applying body motion only to a physical model. This provides the cheapest option of all and has created interesting results for art installations that deal with more abstract representations of the body. Depth cameras may provide decent reference gathering and previz, but they are ultimately less than ideal for an actual video game project due to the amount of work still required to make the result visually appealing post-shoot. That said, the quality of all these options is increasing at an encouraging rate.

Microsoft's Kinect has made basic mocap available to the masses. (Courtesy of Richard Boylan.)

Performance Capture

Perhaps the biggest breakthrough in increasing acting quality in recent years has been the introduction of performance capture. While motion capture refers to recording only the body, performance capture records the body, face, and voice all at once using head-mounted cameras and microphones. Doing so adds a level of continuity in subtle facial acting that was simply impossible with the previous method of recording everything separately and recombining it in a DCC. While this method has become ubiquitous for cinematic cutscene shoots and head-cams are becoming more affordable, care must be taken to ensure all three tracks (body, face, and audio) remain in sync. As such, the mocap stage will generally provide time codes for each take, which must be maintained during the assembly and editing phase.

Real-time face capture in Unreal Engine 4 via head-mounted camera. (Copyright 2018 Epic Games, Inc.)

While it used to be a requirement for the real and virtual actors' faces to match as much as possible in order to best retarget the motion to the facial muscle structure, new methods are being employed to retarget to a digital double first, then translate to the desired face of your chosen video game protagonist. The extra step naturally makes this process more costly, with the benefit of freeing you up to cast the best actors regardless of how they look. Due to the extra overhead of camera setup and calibration, cinematic shoots typically go much slower than the often rapid-fire process of in-game shoots, not to mention their reliance on scripts, rehearsals, and multiple takes to get the action just right. While rehearsals can certainly benefit in-game shoots (most notably when choreography is required, such as during combat), they are less of an absolute necessity than is the case with cinematic cutscenes.
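Returning to the earlier point about keeping body, face, and audio in sync: a toy illustration of using each track's delivered start timecode to compute alignment offsets during assembly (the timecodes are invented):

```python
def to_frames(timecode, fps=30):
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in timecode.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# Hypothetical start timecodes for one take's three tracks:
tracks = {"body": "01:02:10:00", "face": "01:02:10:12", "audio": "01:02:09:18"}

# Offsets (in frames) needed to align every track to the earliest start:
starts = {name: to_frames(tc) for name, tc in tracks.items()}
earliest = min(starts.values())
offsets = {name: start - earliest for name, start in starts.items()}
print(offsets)  # {'body': 12, 'face': 24, 'audio': 0}
```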

The secret to successful cinematics isn't a technology question, but ensuring the performance (and writing) is as good as possible. There's only so much an animator can do to polish a flatly delivered line of awkward dialogue from a wrongly cast actor. For the remainder of this chapter, we'll be focusing on optical marker-based capture, as it is the method most commonly used at video game studios and therefore the one an animator is most likely to encounter in their career.

The Typical Mocap Pipeline

The typical workflow for optical marker-based mocap is as follows:

1. The actor arrives and, once suited up, is calibrated into the system, matching their height and size (and therefore unique marker positions) with a character in the capture software, with dimensions provided by the game project's character.

2. The actor is directed on the stage to capture the desired motion. Either then, or later via viewing software, the director decides which takes (and sometimes frame ranges) they wish to purchase from the stage. See the "Directing Actors" section later in this chapter for some best practices.

3. The stage crew then clean up the motion by fixing lost markers and smoothing out extraneous jerky motion caused by marker interference. This process can take anywhere from a few hours to a few weeks. The cleaned-up motion is delivered to the game studio as "takes."

4. A technical animator at the game studio retargets the delivered data onto the in-game character, checking the quality of the delivered mocap and requesting redeliveries if the quality is off. (See the "Mocap Retargeting" section below for more on this step.)

5. The animators then begin working on the mocap that now drives their game characters. This usually consists of a mocap rig and a control rig both driving the export skeleton, allowing the animator to trace motion back and forth and work in a non-destructive manner, adding exaggeration and appeal without destroying the underlying motion that has been bought and paid for. For details on this stage, the most involved step for the animator, see the section "Working with Mocap" at the end of this chapter.

Mocap Retargeting

Because actors rarely match the dimensions of the video game characters they are portraying, the studio must retarget the motion from the delivered actor-sized data onto the game character. There are a variety of settings for getting the best possible translation between characters without issues that would be difficult to fix at a later stage without redoing the retargeting process.


This process is generally performed in MotionBuilder, and the single biggest potential issue to be wary of is the use of "reach" to match a limb's captured position regardless of the difference between the source actor and the game character. Generally used on the feet to ensure they match the ground and prevent foot-sliding, reach can also be used for the hands when it is essential they match the source position, such as when interacting with the environment (grabbing onto a ladder, for example). However, leaving reach on for the arms in general can be disastrous, as the hands will always match the source, causing the arms to bend or hyperextend unnaturally to maintain that position. At this stage, the person tasked with retargeting should also keep an eye out for individual bad retargets, jerky motion where lost mocap markers weren't correctly cleaned up, systemic issues that plague every delivered motion such as bent clavicles or spines, and loss of fine detail due to smoothing applied by default to all motions.
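One cheap automated check for the hyperextension problem described above is to compare the retargeted shoulder-to-wrist distance against the character's actual arm length each frame. A minimal sketch, with the names, positions, and tolerance all invented for illustration:

```python
import math

def detect_hyperextension(shoulder_pos, wrist_pos, arm_length, tolerance=0.98):
    """Flag frames where reach forces the arm close to (or past) full lock."""
    dist = math.dist(shoulder_pos, wrist_pos)
    return dist > arm_length * tolerance

# Hypothetical values in meters for one frame of retargeted data:
arm_length = 0.62  # upper arm + forearm of the game character
print(detect_hyperextension((0, 1.5, 0), (0.63, 1.5, 0), arm_length))  # True
```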

Mocap Shoot Planning

The absolute worst thing an animator can do is arrive on the shoot day unprepared. Here are several essential practices that will ensure as smooth and productive a shoot as possible.

Shot List

A shot list is invaluable both on the day of shooting and in the run-up to the shoot, as it's the single best way to evaluate everything you'll need to capture, which in turn helps plan out the shoot. While the mocap stage will often provide their own formatting, as they require a copy before the shoot day for their own preparation purposes, you can make a start yourself in Excel or Google Docs. Any shot list should contain the following columns (see the sketch after this list):

• Number: Helps to count the number of shots and therefore make a time estimate.

• Name: Shots should be named as per your file-naming convention.

• Description: A brief explanation of the desired action, lest you forget.

• Props: Which props will be required by the actors—even if not captured.

• Character: Essential for multicharacter shots, for retargeting purposes.

• Set builds: Similar to props, but rather the walls, doors, and so on that actors will be interacting with.

• Notes: Added on the day as required, describing directing notes such as preferred takes.
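Those columns translate naturally into structured data, which makes time estimates and sorting trivial. A minimal sketch with invented example rows and an assumed per-shot time budget:

```python
from dataclasses import dataclass, field

@dataclass
class Shot:
    number: int
    name: str
    description: str
    props: list = field(default_factory=list)
    character: str = "hero"
    set_build: str = ""
    notes: str = ""

shots = [
    Shot(1, "jump_wall", "Run-up and vault over low wall", set_build="low_wall"),
    Shot(2, "pistol_reload", "Standing reload", props=["pistol"]),
]

# A crude stage-time estimate at a hypothetical 15 minutes per shot:
print(f"~{len(shots) * 15} minutes of stage time")
```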



A typical shot list for planning a motion-capture shoot.

Ordering/Grouping Your Shots

A variety of factors will determine the ordering of mocap shots, not least the priority in which they're needed for the game schedule, ensuring work isn't delayed should you fail to capture everything, which happens often. In addition, grouping multiple actions requiring the same set builds and props (especially if the props are captured) will ensure a good flow onstage. Perhaps the largest time sink on any shoot day is the building of sets, so great consideration must be taken to capture everything required on one set build before it's dismantled (a simple grouping sketch follows). It is wise to avoid capturing high-energy actions at the end of the day (or even right after lunch), as the actors will naturally be more tired then. Conversely, start the day with something fun like fast and easy rapid-fire actions that will build momentum and set you up for a great day of shooting.
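Grouping by set build is a simple sort-and-group operation over the shot list; the shot and set names below are invented:

```python
from itertools import groupby

# (shot name, set build) pairs pulled from a hypothetical shot list.
shots = [
    ("jump_wall_A", "low_wall"),
    ("vault_table", "table"),
    ("jump_wall_B", "low_wall"),
    ("climb_ledge", "low_wall"),
]

# Sort, then group, so every action on one build is captured back to back.
ordered = sorted(shots, key=lambda s: s[1])
for build, group in groupby(ordered, key=lambda s: s[1]):
    print(build, [name for name, _ in group])
# low_wall ['jump_wall_A', 'jump_wall_B', 'climb_ledge']
# table ['vault_table']
```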

Rehearsals

The most proven way of having an efficient mocap shoot while obtaining the highest quality of acting is rehearsing beforehand. Not only is it a great way to build a relationship with the actors, but it also allows them to go deeper into any performance. While not needed for many in-game actions that can be worked out quickly on the day, rehearsing is essential for cinematic or story shoots that require deeper characterization and will likely have closer cameras that can highlight detail, especially for full facial performance capture. Something as subtle as a thoughtful pause or a tilt of the head can make the world of difference to a performance when an actor is fully invested, and giving them time to know and understand the characters they're portraying is impossible without rehearsing.

Mocap Previz

Another excellent way to avoid costly wasted time on set (or afterward, when you find the actions just won't fit the scene) is to previsualize the actions before even going to the shoot. This way, not only can progress be made on the various gameplay and story scenarios without having to wait for shoot day to roll around, but you'll already have figured out many of the technical issues within simple, cost-effective scenes, such as how much of the set you'll actually need to build based on touchpoints. Importantly, understand that the actors will likely improvise and suggest much better actions than your previz on the day of shooting, so it should be used as a guide and initial inspiration only, giving the director a better evaluation of what will and will not cause technical issues when new suggestions arise.

Previz of a mocap shoot reduces guesswork regarding staging and sets.

Working with Actors

While it can be physically exhausting to shoot mocap, sometimes over a series of days, working directly with actors can be one of the most rewarding aspects of video game animation as you collaborate to bring your characters and scenes to life. The improvisational collaboration, sometimes reaching an intimate level as you work together to create the perfect performance, can be a refreshing break from working at a computer screen.

Casting

Before shooting starts, the correct casting can make or break the quality of your game, especially on a narrative-heavy project. Unless the studio has an in-house mocap facility, game animators often travel for mocap, so booking nonlocal talent (actors) makes the logistics difficult. The need for regular access affects any decision between local and distant talent, often resulting in game studios adopting a hybrid approach where performance actors are used only for cinematic shoots and voiceover, with the majority of gameplay actions performed by other, more easily available actors and stunt performers.


Watch_Dogs built their character to the exact measurements of the stuntman for better retargeting. (Copyright 2014 Ubisoft Entertainment. All Rights Reserved. Watch Dogs, Ubisoft, and the Ubisoft logo are registered or unregistered trademarks of Ubisoft Entertainment in the U.S. and/or other countries.)

While every cinematic game character will require different traits depending on the story, the requirements for in-game motions are much more uniform. As such, it is possible to break down the things you should be looking for when casting and auditioning (if the budget allows).

• How well do they take direction? A key interaction between the director and the actor is massaging the action toward the desired result. While it is as much down to the director to communicate effectively, if the actor requires too many takes to get it, doesn't listen, or gets easily distracted, then shoots will run much less efficiently than otherwise. Like much of the team-oriented nature of game development, motion capture is very much about human relationships, so a good rapport with the actor is essential.

• How imaginative are they? An actor is expected to bring much of the performance to the table themselves, only improving what the director had in mind. If their acting choices are always pedestrian and expected, then it may as well be the director in the mocap suit. Does the actor surprise and delight with how they read the lines, attack the monster, or climb over the obstacle? Actors should ideally be making suggestions for alternate takes, even more so as they grow to know the project and the roles they're playing.

• How do they carry themselves? This is entirely a question of physicality. It's very difficult to change an actor's natural walk, or to keep correcting mistakes if they tend to always favor the wrong foot forward when returning to idle. Do they walk heavily when the character is supposed to be light of foot? Are they able to retain the game character's idle pose without being reminded?

• What are their measurements? Height will affect retargeting of the motions, and weight comes across in every movement. While it is possible to replay motion from a 5-foot-10 actor onto your 20-foot ogre, the closer you get to the required height and weight, the less time is spent completely reworking the motion later. This is especially important when casting your lead character, as having the actor match their virtual character's size as much as possible will reduce retargeting issues and touchpoint problems throughout the project.

• Do they have the required skills? Martial arts, weapon handling, or dancing cannot just be learned on the spot. If your character requires a specific skill, then you must ensure that the actor already has the required training. Similarly, if they are also to play non-martial-arts-trained characters, can they "unlearn" it enough to fight like a regular person?

• How physically able are they? If your game character is expected to be acrobatic and able to perform flips and somersaults, ensure that the actor is physically able to do that, at least at the beginning of the day before they are exhausted. If the character is supposed to be awkward and clumsy, can they recreate that kind of motion well?

Directing Actors

So the day has been set and the shot list has been delivered to the mocap studio, but how do you get the best performance on the day and ensure you get through as much of the shot list as required with minimal takes? One common mistake many animators make when first directing actors is to try to mold their movements as one might a 3D animation, literally describing the actions in anatomical terms such as raising the arms higher, crouching lower, moving faster, and so on. That is, however, the surest way to capture unnatural, mechanical movements as the actor struggles to remember and incorporate these modifications in each successive take.

Direct by storytelling: Give the actors as much to work with as possible, such as who/when/where they currently are. Explain what just happened and where they're going next. After the initial explanation of what the shot involves, actions should be improved by using verbs that keep the actor in the fantasy that was created. If you set up a run cycle as the character being chased by a dog and want them to run faster, then tell them the dog is gaining on them, or that it's a pack of wolves. If you need them to crouch lower when moving, tell them they're being shot at or ducking under a laser trip-wire. If a tired jog simply isn't tired enough, then suggest they're in the last mile of a marathon. Using scenarios over simple verbs is essential for directing any creative mind, because not only does it avoid simply doling out the aforementioned technical instructions, but most importantly it leaves ownership of the final result with the actors. They are providing a better performance by getting even more in character, and you are getting the desired result in as natural a manner as possible. It's even better if you can ask the actor how they think the character would react. While as director you always have the final say, it works so much better when you arrive at something through conversation rather than one-sided instruction. (This goes for any collaboration on the game team as well.)

Never criticize: One golden rule is that acting is a very personal art form, and actors are putting themselves in a position of vulnerability to deliver you their performance. Never use negativity when describing why a take isn't what you want—instead, always move in a positive direction, such as, "That was great. Let's do another take and incorporate this…," or ideally, "If the situation were to change, how would [your character] react?" Don't feel the need to always have all the answers immediately. Again, it is a collaboration, and keeping the actor in a state of always giving and bringing more to the performance is the key to avoiding dry, unimaginative performances.

Single point of contact: Another sure-fire way to confuse an actor is to give multiple different directions. If you are discussing the take with another team member, make sure only one of you gives the final decisions to the actor. It's okay not to know what you want initially, but direction must be clear once it's finally decided upon. Such discussions should take place away from the actor rather than becoming a three-way street, and usually involve one animator handling the performance while another ensures the technical or design considerations are catered to.

Props and Sets

Props are an essential part of any game mocap shoot, usually taking the form of weapons or other gameplay-related items. Importantly, props give actors some tangible "business" (something to play with in their hands or touch in the environment) to work with in cinematic story scenes, making the scene feel more natural.

A typical weapon prop armory required for games. (Courtesy of Audiomotion.)

Characters interacting with the sets they are in, such as leaning on tables or stepping up onto platforms, grounds them in the video game environment far more than a performance taking place on an infinite flat plane. Sets require communication and planning between the animators, level artists, and designers to ensure that at least the important elements of the background are relatively locked down and do not change after the shoot.

Prop Recording

Prop manipulation in cinematic scenes is an excellent way to add visual interest to a scene and gives the actor something to play with, be it a cigarette to smoke, a keyboard to type on, or simply an inanimate object like a ball to pick up and handle while delivering lines. However, it makes an order of magnitude more work for the animators later, as contact points must be maintained and cleaned, proving especially difficult if the object is picked up and put down repeatedly in the take or passed through the fingers. Some of this difficulty can be alleviated by recording the motion of the prop in the same manner as the actor, marking it up and entering it into the system, though this requires the prop to be retargeted on delivery of the mocap. As such, care should be taken to measure props for digital recreation to ensure the best retargeting on delivery.

On the flip side, however, giving an animator essentially two actors to manipulate can make a scene overly complex depending on what is required. For example, if the prop (such as a rifle) stays in the hands throughout, then its exact position and orientation are not required, or if it's not shown in a cinematic close-up shot, fidelity is less important. In cases like these, it can be easier simply not to capture the prop, and instead to use the provided video reference to place it back in the actor's hands and keyframe it when it comes to be animated.

Consider the example of a pistol. If you are capturing a set of movements to create the pistol strafing system, you will likely record rapid-fire shots of the actor walking and running in various directions while holding a pistol. As the pistol is always held in the right hand and does not move much in each of these actions, it is easier not to record the pistol and simply attach it to the character's hand (or rather, the weapon joint of the hand) on receiving the mocap, as sketched below. However, if you are capturing a complex reload, a weapon switch, or another action that requires the prop to be moved around inside the hand, passed between hands, or returned to a holster, then it is valuable to capture the pistol prop's motion and retarget it, giving the animator all the data required for the best possible fidelity.
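Attaching an uncaptured prop to the hand is typically a single constraint in the DCC. In Maya, for instance (the joint and prop names here are hypothetical):

```python
import maya.cmds as cmds

# Lock the pistol prop to the hand's weapon joint so it inherits all
# hand motion; maintainOffset keeps the grip alignment you posed.
cmds.parentConstraint(
    "hand_R_weapon_jnt",  # hypothetical weapon joint on the right hand
    "pistol_prop",        # hypothetical prop transform
    maintainOffset=True,
)
```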

Set Building

Building sets is an essential part of any mocap shoot that requires the actor to interact with the environment. All motion-capture facilities should have an inventory full of pipes, ladders, and boxes of all sizes that can be combined to create the required sets. Importantly, only the touchpoints an actor will interact with need to be built, with the rest falling to the actor's imagination (aided by whatever direction and reference materials you provide). Some mocap stages provide the ability to visualize a virtual set in a real-time view, allowing the actors to get a better sense of the environment they're in (provided your team has built it already, of course). This can be a distraction, however, as the actor may be caught looking at a screen while recording, so it is best used only before a shoot for context. Relevant geographic locations such as distant mountains or buildings can just as easily be indicated to the actor as corners of the mocap stage, or by someone holding up a prop off-camera.

Building out sets to measure before a shoot.

When planning set builds, care must be taken to ensure measurements are relatively accurate for the best possible results, such as chair and desk heights, for example, and the distances between objects. This is where previz again comes in much handier than storyboarding, as you are planning out the set in 3D and can measure relationships there. All of this assumes the game and characters are built to real-world scale, with correct sizes worked out in the DCC (a recommended practice for scale consistency that even non-mocap games will benefit from). Before attending the shoot, it is important to consider that large set pieces may occlude the cameras, so walls, doors, and the like must often be built from mesh frames to allow as many cameras as possible to triangulate the markers. The people best equipped and most experienced in set building will be the stage team, so make sure to communicate any complex sets to them as early as possible ahead of the shoot so they can plan accordingly, sometimes even building the set ahead of time. Set building is time-consuming on the day of shooting, so consider set complexity and frequency when estimating how many shots you hope to capture.


LA Noire's set matches only the important touchpoints of the in-game boat. (Courtesy of Rockstar Games.)

Being smart about shared set builds allows the day to go as efficiently as possible. For example, if there are three separate scenes or actions that require an actor to jump off a wall, then capture them out of order in the shot list to get them all within the one set build. There's no need to rearrange the shot list itself, as the stage crew will be used to jumping around it for expediency, even adding new unplanned shots, as long as all information is kept up to date. Entering the set build info into the shot list ahead of time will allow these shortcuts to be anticipated. Mocap sets are built primarily from various-sized boxes for solid platforms to stand on, with piping used to create scaffold frames that represent walls and other parts the actors must touch, without requiring full walls that might occlude the markers. It is the stage crew's responsibility to ensure the sets are built safely, so it is recommended to leave set building to the professionals, beyond helping to clear away minor parts as sets are deconstructed. Unlike the rest of video game creation, motion capture is a very physical activity, so care must be taken to avoid injury, even when just picking up objects.

The more time you spend at a stage, the more familiar you'll become with their inventory of props and set parts, and the better you'll be able to plan ahead by knowing what sets can be built from it. The most important element of set building is that you get the motion you want, so don't be afraid to make requests of the stage crew if you feel the set isn't solid enough for the correct impact, or not high enough for the best fall.

Virtual Cameras

Motion capture is always searching for ways to get the actors more invested in the scene or action they're currently performing, for the best possible performance. Beyond showing them background information via scripts, character profiles, concept art, and previz videos, one exciting option is to put the actor into the scene by way of a real-time connection between the stage and MotionBuilder or, even better, the game engine. A step further allows screen-mounted virtual cameras to move through the volume (location-tracked via markers), with the real-time render displayed on the camera's screen. This allows the animator to see the actors perform in a similar manner to a live-action shoot. This can even take place at a later stage without the actors, where a replay of their motions is combined with the real-time capture of the camera.

Real-time feedback of sets and facial acting on Hellblade. (Copyright 2015 Ninja Theory Limited.)

Getting the Best Take

At the mocap stage, it can sometimes be difficult to decide which of several takes is the one you wish to work with back at the studio, and preparation can only go so far in protecting an animator from returning with an unworkable shot. Here are some points to keep in mind when shooting or deciding between multiple similar takes.

• Remember key poses: If your character's idle stands left foot forward, then always keep an eye on the footing. Changing this back at the studio is one of the biggest avoidable time sinks, requiring only a little oversight. If the actor is struggling to remember, or if you just want to be sure yourself, provide the stage with a file of your character's mesh exported in key poses to bring up on the real-time screen. For extra peace of mind, have the actor assume the pose and tape the ground where their feet lie (though ensure the actor doesn't look at the ground when returning to the pose—the tape should be for the director's reference only).


• Mocap mirroring: If an actor generally prefers to jump up to a ledge with their stronger leg but it's the opposite of what you need, remember you can always mirror animations later. If the action is asymmetrical (e.g., the gun must be in the right hand), consider carrying the gun in the left for this shot only and starting/ending in a mirror of the idle pose. Better actions will always come from whatever the actor is most comfortable with, so don't force something, especially a large action, in a way that the actor's body won't perform naturally.

• Weigh the amount of rework: When several takes all look equally good, go for the one that will require the least work back at the studio. As a general rule, it is easier to remove motion than to add it. For example, choose a take where an actor walks a little longer than necessary rather than less. If an actor shuffles their feet at the end of an action to hit the mark, choose instead the take where they confidently land near the correct pose, as changing the final pose is easier than removing the foot-stepping. With practice, you'll eventually make decisions on the stage to move on to the next action without too many retakes, confident in the knowledge that you can rework the captured action to make it perfect rather than waiting for that one perfect take.

• If in doubt, get coverage: In a similar manner to getting different line reads when recording voice, "coverage" is a tried and tested way to increase the chances you'll get what you want from a single action. That basically means trying a variety of different approaches to an action, with variations in attitude, speed, and so on. When you're unsure of exactly what will look best, keeping takes loose like this gives you more options later when deciding on the one you'll work up. Sometimes you might even want the start of one take and the end of another—as long as it's not too difficult to make the join, this is an entirely valid approach when choosing takes.

• Watch the actors, not a screen!: A classic mistake when choosing takes, and when directing in general, is for the director to watch the performance onscreen rather than live on the stage. Screens are fine for rewatching, but the initial performance should be viewed live in order to catch subtleties not caught by video or by the mocap played back in 3D. Eventually, knowing what to look out for and where to focus your attention will come naturally—just be sure to watch from the best angle so as not to miss anything.

Working With Mocap

In a reversal of keyframing, which begins with key poses that are then timed out before working on the in-betweens that flow between them, "raw," untouched mocap can be likened to the animator having first created all their in-betweens and now needing to add in key poses and timing.


There are two main methods of mocap editing, both with their pros and cons, but it is quite easy to choose the one you want depending on the action at hand.

1. The "destructive" approach: Manually removing keys so as to make the mocap more malleable for a human. Lessening the key density allows animators to manipulate mocap as they would keyframes, shifting the timing of the now-reduced key poses as well as only having to worry about these poses when required to amplify and exaggerate them. This works for less-realistic styles that don't need to adhere as strictly to the realism afforded by mocap, and is generally employed on gameplay-centric animations such as melee combat, where fast response times and strong, identifiable poses are desired. However, this approach isn't referred to as "destructive" for nothing: in taking it, we are destroying much of the detail that was sought and paid for in employing motion capture in the first place. Once we've deleted those keys, they can't come back, so this approach is best used in combination with the following, and only when confident in manipulating mocap while retaining its essence.

2. The "non-destructive" approach: Very rarely deleting keys, instead adjusting them as a group and working on top of them. Retiming mocap by scaling sections of keys and employing animation layers to paste adjustment poses onto the mocap underneath retains all of the realism and detail unique to mocap while still allowing the animator to exaggerate poses in the same manner as above, minus the risks. Maintaining many layers can become complex, especially when counteranimating adjustment layers to iron out issues, so it's best to stick to only one or two adjustment layers at most. This is the recommended method for mocap manipulation and the one most commonly used by video game studios, so we'll continue exploring it in more detail.

Retiming

The first thing you'll want to do with your mocap is to figure out the timing, which generally involves speeding it up. Sometimes a blanket increase is a good way to give actions more punch and start from there. Assuming a non-destructive method, this can quickly be achieved by scaling the motion universally in either the timeline or graph editor, though understand that keys will now no longer fall on whole-frame intervals (not an issue if you plan to work on layers). However, any game animator worth their salt knows that simply scaling an action to speed it up is something only a designer would do. While that might be a good base, the way you really bring an action to life is by emphasizing the important elements, such as making a punch faster, while leaving elements like the ensuing follow-through with a longer delay. As such, another method is to scale different elements of the action individually by way of a time-warp curve that allows for custom speed adjustment across different parts of the action rather than a one-size-fits-all linear scale.

A cleaner method is to keep the motion in a clip editor such as MotionBuilder's Story Tool or Maya's Time Editor and cut the clip up so as to overlap and blend it back onto itself. This method eliminates any pause or hesitation on the part of the actor while affording a good visual representation of where the edits were made. Should you opt for the destructive route, simply deleting keys and moving them on the timeline to increase or decrease the spaces in-between will allow for fine-tuned scaling of every part of an action. This is better suited to actions whose timing will be heavily modified—just be careful not to remove too many frames, as some of that beautiful detail and overlap will be lost. As such, it's recommended to keep more than just the extremities of key poses, also maintaining poses that deliver overlap, such as an extra forearm bend after the shoulder has settled into a punch.

A mocap clip cut up for speed in MotionBuilder’s story editor.
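To make the blanket retime concrete, here is a minimal sketch using Maya's Python API, assuming the mocap is baked onto the currently selected controls; the frame range and speed factor are placeholder values, not a prescribed workflow.

```python
# A minimal sketch (Maya Python API) of a non-destructive "blanket" retime,
# assuming the mocap is baked onto the currently selected controls.
import maya.cmds as cmds

def blanket_retime(start, end, speed_factor):
    """Uniformly speed up all keys on the selected controls."""
    controls = cmds.ls(selection=True)
    cmds.scaleKey(controls,
                  time=(start, end),             # only touch this range
                  timeScale=1.0 / speed_factor,  # 1.25 = 25% faster
                  timePivot=start)               # anchor the first frame

blanket_retime(0, 120, 1.25)
```

From this base, a time-warp curve or clip-editor cuts can then supply the per-element emphasis the paragraph above describes.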

Pose Exaggeration

Exaggeration comes by way of modifying the underlying pose at certain points. In using real actors for our motion, we push for realism, warts and all. In real life, humans rarely hit the appealing poses or dynamic silhouettes that work from all angles as required by video game animation. To overcome this, modifying key poses to amplify their readability and silhouette will greatly increase the motion's appeal, as well as aid any timing changes made at that earlier stage. If characters jump, make them jump higher and land harder. If they swing a sword, pull it further back before swinging and end in a strong posture.


Before and after accentuating a motion-capture performance via exaggeration, aiming for a clear line of action.

If the actor pulled their punches, make sure your character connects deep inside the target, with an equivalently painful reaction as the head snaps back with full force. Real life just isn't real enough for a video game, so mocap should be treated as such. Of course, the extent to which you exaggerate poses (or timing) depends on how stylized the project is and how willing the style is to move away from the pure realism afforded by mocap in the first place. But even "realistic" games go to pains to exaggerate poses for readability and appeal. Even something as non-action-oriented as an idle loop can be immeasurably improved by pasting a more appealing pose onto it.

Offset Poses

The process of modifying an action's pose generally involves pasting an approved idle pose at the start and end of an action on an additive modification layer. However, the further the captured motion is from the standard pose, the more the entire action in-between will veer wildly from the original capture. For example, if the new pose leans further backward, then the character will lean backward throughout the action. Because of this, it is advisable to paste the new pose only just before the action begins, then paste a zero pose (where the offset value is zero, so the motion reverts back to the original mocap) once the action begins. Then, once the action has played out, paste another zero pose just before the action returns to its ending idle position (where again we paste an offset pose). This is a good quick first pass, though it will likely cause side effects that can be overcome below.



Offset vs zero poses when manipulating mocap via layers.
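As a rough sketch of that bookending in Maya's Python API (the layer name, frame numbers, and rotation-only keying are illustrative assumptions, not a fixed recipe):

```python
# A minimal sketch of bookending an action with offset and zero keys on an
# additive layer. Frames 20-80 are the action within a 0-100 clip.
import maya.cmds as cmds

def key_offset(controls, layer, frame):
    """Key whatever offset pose is currently set on the adjustment layer."""
    cmds.setKeyframe(controls, time=frame, animLayer=layer)

def key_zero(controls, layer, frame):
    """Key the adjustment layer back to zero so raw mocap shows through."""
    for ctrl in controls:
        for attr in ("rotateX", "rotateY", "rotateZ"):
            cmds.setKeyframe(ctrl, attribute=attr, time=frame,
                             value=0.0, animLayer=layer)

# Create an additive layer holding the selected controls, hand-pose the
# offset on it, then bookend the action.
controls = cmds.ls(selection=True)
layer = cmds.animLayer("poseAdjust", addSelectedObjects=True)

key_offset(controls, layer, 0)    # hold the offset through the opening idle
key_offset(controls, layer, 20)
key_zero(controls, layer, 25)     # revert to raw mocap during the action
key_zero(controls, layer, 75)
key_offset(controls, layer, 80)   # return to the offset for the ending idle
key_offset(controls, layer, 100)
```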

Hiding Offset Pose Deltas

Typically, once an offset pose is placed, it will be apparent to the viewer where the offset and zero poses occur, as the character morphs between the new pose and the original, with foot-sliding and weight-shifting occurring across the delta between poses. A good trick to avoid foot-sliding here is to copy and paste the offset pose key from the start of the action to a point where the foot steps or performs another large action, such as twisting on the ball of the foot. Once the foot moves to a new position, paste a zero pose so the movement masks the transition from the original position. Retain that zero-pose position until the last time the foot moves in the action, this time by pasting the zero pose before the final move, then copying and pasting the final offset pose from the end of the action to where the foot finally rests.



Offset foot IK poses relative to the main body movement so as to eliminate foot-sliding.

This method of pasting either offset or zero keys can be used for any part of the body (not just the feet) to better mask the blend between appealing offset poses and the original mocap. The trick is to have different body parts move between the zero pose and the offset pose at different times, ideally as the underlying animation moves also, so as to make the delta invisible. While referred to above as the zero pose, the adjustment pose used needn't have a value of zero (which would revert back to the original mocap), but can instead be any offset value that allows for natural movement in-between the standard idle offsets bookending the entire action. Should the idle be too far offset from the underlying mocap, you'll likely instead use a nonzero adjustment pose closer to the new modified pose while still allowing the mocap to play cleanly underneath.

Blending and Cycling

Working with nonlinear actions in mocap is made significantly easier by using a clip sequencing tool (such as Maya's Time Editor or MotionBuilder's Story Tool) because of their ability to overlap and blend between different actions. Rather than having to animate the joins between two takes that more than likely do not match exactly, as long as two elements of the action are similar enough, we can overlay both actions at that point and simply blend across them, such that the end of a cycle blends back to its start.


Two similar passing poses with a blend across clips in a sequencer.

When blending, it is important not to look for two poses that match, but instead for two matching motions (such as a crouch, a hit, a turn, etc.). This allows the actions to be blended together across a moving action rather than a static one, further masking the transition with motion and making the clean-up work easier afterward. Blending across two actions at a relatively static portion (such as leaning against a wall) will leave visible artifacts as the actor shifts uniformly between the two different takes, as the poses are never identical. While this method is commonly used for joining two takes together, it is even more powerful as a tool to create seamless cycling actions—something commonly used in video games. This is made especially easy because cycles by their very nature feature the repeating actions required for seamless blending (a walk repeats itself every second step, for example). For a walk cycle to be blended, the motion must feature at least two matching steps, meaning at minimum three steps (L, R, L or R, L, R) to allow a blend over the third step back across to the first. The results are invaluable for something as important as a walk or run that players will see repeatedly throughout the game: this gives the best visual results, and starting with a seamlessly cycling action is a great first step before the long process of massaging the mocap into the looping cycle the player will eventually see. Attempting to create a similar walk cycle via a destructive approach by simply working into the join at the end will not only take much longer but will likely require the deletion of keys from perhaps the last step of an entire run cycle, losing all that valuable motion and attitude in the process.
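As a minimal, engine-agnostic sketch of that overlap-and-blend (poses here are simple dicts of per-joint Euler rotations; a production sequencer would slerp quaternions and handle root motion separately):

```python
# Close a cycle by fading the matching third step back over the loop's
# opening frames, so the last loop frame flows seamlessly into the first.

def close_cycle(frames, loop_start, loop_end, blend_len):
    """Return frames[loop_start:loop_end] with its head crossfaded from the
    matching motion that follows loop_end."""
    loop = [dict(f) for f in frames[loop_start:loop_end]]
    for i in range(blend_len):
        w = i / float(max(blend_len - 1, 1))  # 0 -> 1 across the overlap
        tail = frames[loop_end + i]           # the step matching step one
        for joint in loop[i]:
            loop[i][joint] = tuple(w * h + (1.0 - w) * t
                                   for h, t in zip(loop[i][joint],
                                                   tail[joint]))
    return loop
```

At the first blended frame the pose equals the frame that directly follows the loop's end, so the wrap-around is continuous, and the weight ramps back to the loop's own poses over the overlap.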


Motion Matching

Motion matching, since its initial introduction to games by Ubisoft, has been perhaps the most exciting new technology on the horizon for video game animation in some time. Historically, game animations are exported and read merely as position and rotation values on bones, but this new approach also parses data on momentum, velocity, and matching poses to offer automatic and visually flawless transitioning between motions, dynamically selecting the most appropriate section of mocap from a large databank at runtime. Now that it has been shipping in sports titles and action games for the last few years, we have a better collective understanding of its strengths and weaknesses, and how a team might go about implementing motion matching into their game.

Motion matching is used extensively in EA Sports titles. (Courtesy of Electronic Arts.)

But first, a word of caution. Motion matching, for all its visual benefits, is a very complex system to manage—especially for player-controlled characters in gameplay-heavy games where accurate response times are tweakable to the frame. It is most effective for slower-paced games where realism is paramount, or for NPCs whose motions are less affected by frame-perfect balancing and tweaking. Importantly, it is not a catch-all solution for applying realistic motion across an entire game; it works best with traversal motions, and even then requires a lot of work to polish. Additionally, motion matching is already being rapidly replaced with machine-learned motion that references similarly large datasets of motion but instead teaches the character to move rather than simply looking up the appropriate frame to play at runtime. Thankfully, from the animator's perspective the shot list is identical. This newer approach just places the onus on real-time calculations over storing lots of motion capture data—a win for memory budgets.
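Before planning the shoot, it helps to see how small the runtime core really is. The sketch below is illustrative only: the feature sets and weights are assumptions, not any particular engine's implementation. Every few frames, the system jumps to whichever database frame best matches the character's current pose and the trajectory predicted from player input.

```python
# Illustrative core of a motion-matching search over a mocap database.
import numpy as np

def best_frame(db_pose, db_traj, cur_pose, desired_traj,
               pose_weight=1.0, traj_weight=1.5):
    """Return the index of the cheapest candidate frame.

    db_pose:      (n, p) matched-joint positions/velocities per frame
    db_traj:      (n, t) future root positions/facings stored per frame
    cur_pose:     (p,)   the same pose features on the live character
    desired_traj: (t,)   trajectory predicted from the player's input
    """
    pose_cost = np.linalg.norm(db_pose - cur_pose, axis=1)
    traj_cost = np.linalg.norm(db_traj - desired_traj, axis=1)
    return int(np.argmin(pose_weight * pose_cost + traj_weight * traj_cost))
```

The weighting between pose continuity and trajectory obedience is exactly where the response-time tuning difficulties described above live.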

Planning a Motion-Matching Mocap Shoot

A common misconception is that motion matching reduces work on the back-end: shoot long takes of improvised motion, throw them into the motion-matching "black box," and the system magically figures everything out. This couldn't be further from the truth. If, as all quality projects require, you wish to art-direct and clean up the motion capture post-shoot, then a plan of the actions you require is essential so you can select and edit individual takes to add to the system, reducing the variability in which motions the system will use for any given action. If your dataset includes many versions of the same motion, then it will be difficult to track down which version to edit, and once it is edited the system may then select a different one for use—which can be incredibly frustrating. Even worse, the system may select the start of one version and the end of another (of the same move), causing an unsightly blend in-between and dampening your motion. To combat this, even if shooting long takes, it is still important to select and isolate individual sections to be exported for inclusion in the dataset. While it can be tempting to capture many motions in one long take, it is highly recommended to split motions up for review and retakes in order to reduce the need to reshoot when just one of several motions shot together doesn't work out. This will also keep the energy high on shoot day as you group all similar motions together, lessening the need to stop and re-explain to the actor.

When shooting various direction changes, the actor will tend to favor the leg they are most comfortable with and/or the leg that best fits the action. For example, for a 90-degree right turn while walking, it is most natural to plant and push off with the left leg. For a more rounded set of motions, it is advisable to capture motions on their "off" leg as well. While it is also possible to mirror many motions, bear in mind that actions moving to or from the idle cannot be mirrored if the idle is asymmetrical (i.e., has a limp or is carrying a sword in one hand).

The Motion-Matching Shot List

Below is a non-exhaustive list of actions that must be captured for basic ground movement of a third-person action game, including strafing. While not all are necessary, the more motions you feed into a motion-matching system, the better your results will be, as more eventualities are covered. Shots are grouped by similar motions. Each set of motions has an accompanying diagram, or "dance card," to help explain the motion to your performer. Not every performer, even with extensive mocap experience, will be able to follow and retain such exact technical instructions, so if you intend to invest in motion matching for your project, consider including some motion-matching-like directions during the casting process.

Naming Convention

The examples below use the following naming convention for legibility (a small script after this list shows how the tokens combine):

• fw—forward
• bw—backward
• lt—left
• rt—right
• fr—front-right
• fl—front-left
• br—back-right
• bl—back-left
• to—signifies a transition
• 045–180—turning angle in degrees
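Because the lists that follow are combinations of these tokens, they can be generated rather than hand-typed. Below is a small, hypothetical Python helper (the function and list names are illustrative) that expands the convention into the directional starts-and-stops set, keeping stage software and engine naming in agreement before shoot day.

```python
# Hypothetical shot-list generator built on the naming convention above.
DIRECTIONS = ["fw", "fr", "rt", "br", "bw", "bl", "lt", "fl"]

def starts_and_stops(gaits=("walk", "run")):
    """Expand idle-to-<gait>-<dir>-to-stop-fw for every gait and direction."""
    return ["idle-to-{}-{}-to-stop-fw".format(gait, d)
            for gait in gaits
            for d in DIRECTIONS]

for shot in starts_and_stops():
    print(shot)  # idle-to-walk-fw-to-stop-fw, idle-to-walk-fr-to-stop-fw, ...
```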

Core Idles and Movement

The standard base motions to begin with. It is worth establishing these early, as they'll inform the rest of the set and allow the actor to get into character before the technical complexity begins.

idle
walk-fw
run-fw

Directional Starts and Stops

While the actor must turn to face the direction of movement, ensure they both start and stop in the forward-facing direction (ideally facing a wall at the mocap studio you've established as the "front"). Capturing starts and stops in pairs like this ensures all the relevant acceleration and deceleration motion is included—the distance traveled is to be determined by both your volume and whatever allows the actor to achieve a constant speed for a few steps. The directional stops are especially useful for NPCs aiming to stop on a spot while facing the correct direction.

idle-to-walk-fw-to-stop-fw
idle-to-walk-fr-to-stop-fw
idle-to-walk-rt-to-stop-fw
idle-to-walk-br-to-stop-fw
idle-to-walk-bw-to-stop-fw
idle-to-walk-bl-to-stop-fw
idle-to-walk-lt-to-stop-fw
idle-to-walk-fl-to-stop-fw
idle-to-run-fw-to-stop-fw
idle-to-run-fr-to-stop-fw
idle-to-run-rt-to-stop-fw
idle-to-run-br-to-stop-fw
idle-to-run-bw-to-stop-fw
idle-to-run-bl-to-stop-fw
idle-to-run-lt-to-stop-fw
idle-to-run-fl-to-stop-fw

Pivot Turns

Covers all sharp "plant-and-turns" to the right (assuming mirroring to the left) while in motion. Again, it is important to include enough steps before and after each action to cover all acceleration and deceleration. This is a key area that will require both favored and unfavored leg versions if you wish to capture all possibilities and provide the most fluid motion during traversal.

walk-fw-to-turn-rt-045-to-walk-fw-to-turn-lt-045
walk-fw-to-turn-rt-090-to-walk-fw-to-turn-lt-090
walk-fw-to-turn-rt-135-to-walk-fw-to-turn-lt-135
walk-fw-to-turn-rt-180-to-walk-fw-to-turn-lt-180
run-fw-to-turn-rt-045-to-run-fw-to-turn-lt-045
run-fw-to-turn-rt-090-to-run-fw-to-turn-lt-090
run-fw-to-turn-rt-135-to-run-fw-to-turn-lt-135
run-fw-to-turn-rt-180-to-run-fw-to-turn-lt-180

Strafe Direction Changes

All strafing directional changes can be captured by moving to and from a center point to all the compass-point directions while maintaining a forward-facing orientation. For each direction, you will need to capture at least three stages in sequence (i.e., walk forward, to walk backward, to walk forward), as the first walk begins from idle so doesn't produce the correct transition. Ensure enough of the constant directional movement (without acceleration or deceleration) is captured to create a cycle in that direction (minimum three steps).

walk-fw-to-walk-bw-to-walk-fw
walk-fr-to-walk-bl-to-walk-fr
walk-rt-to-walk-lt-to-walk-rt
walk-br-to-walk-fl-to-walk-br
run-fw-to-run-bw-to-run-fw
run-fr-to-run-bl-to-run-fr
run-rt-to-run-lt-to-run-rt
run-br-to-run-fl-to-run-br

Strafe Diamonds and Squares

These square and diamond shapes efficiently catch all the 90-degree "plant-and-turn" direction changes while strafing. Be sure to capture one additional "base" (to use baseball nomenclature) past the starting point, as the initial start does not provide a direction change.

walk-square-lt
walk-square-rt
walk-diamond-lt
walk-diamond-rt
run-square-lt
run-square-rt
run-diamond-lt
run-diamond-rt

Strafe Starts and Stops

Starts and stops for directional strafing. Uses compass points similar to the direction changes, only the actor must come to a complete stop and pause for a beat before changing direction.

idle-to-walk-fw-to-stop-fw-to-walk-bw-to-stop-fw
idle-to-walk-fr-to-stop-fw-to-walk-bl-to-stop-fw
idle-to-walk-rt-to-stop-fw-to-walk-lt-to-stop-fw
idle-to-walk-br-to-stop-fw-to-walk-fl-to-stop-fw
idle-to-run-fw-to-stop-fw-to-run-bw-to-stop-fw
idle-to-run-fr-to-stop-fw-to-run-bl-to-stop-fw
idle-to-run-rt-to-stop-fw-to-run-lt-to-stop-fw
idle-to-run-br-to-stop-fw-to-run-fl-to-stop-fw


Turn on the Spot

Basic turning on the spot, as per regular state-based motion systems. Ensure the actor comes to a complete stop before turning back by asking them to wait a beat. Notably, motion-matching systems may not cover parametric blending (such that in-between angles like 120 degrees could be achieved by blending together the 090 and 135), so some degree of rotation under the character may be required in lieu of blending (see the sketch after this list).

idle-fw-to-turn-rt-045-to-turn-lt-045
idle-fw-to-turn-rt-090-to-turn-lt-090
idle-fw-to-turn-rt-135-to-turn-lt-135
idle-fw-to-turn-rt-180-to-turn-lt-180
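For reference, the parametric blend mentioned above is just a weighted mix of the two nearest captured turns. A minimal sketch (the angle set comes from this shot list; the function name is illustrative):

```python
# Weight the two nearest captured turn clips to hit an in-between angle.
CAPTURED_ANGLES = [45, 90, 135, 180]

def turn_blend_weights(target_angle):
    """Assumes 45 <= target_angle <= 180, i.e. within the captured range."""
    lo = max(a for a in CAPTURED_ANGLES if a <= target_angle)
    hi = min(a for a in CAPTURED_ANGLES if a >= target_angle)
    w = 0.0 if hi == lo else (target_angle - lo) / float(hi - lo)
    return (lo, 1.0 - w), (hi, w)

# A 120-degree turn: one third of the 090 clip, two thirds of the 135 clip.
print(turn_blend_weights(120))  # ((90, 0.333...), (135, 0.666...))
```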

Repositions

Essential short and long single-step motions to further aid NPCs in hitting their desired waypoint marks. On player characters they can be confused with the directional starts and stops already captured, so they are best retained for NPCs only, where the desired outcome is predictable.

idle-fw-to-step-short-fw-to-step-short-bw
idle-fw-to-step-short-fr-to-step-short-bl
idle-fw-to-step-short-rt-to-step-short-lt
idle-fw-to-step-short-br-to-step-short-fl
idle-fw-to-step-long-fw-to-step-long-bw
idle-fw-to-step-long-fr-to-step-long-bl
idle-fw-to-step-long-rt-to-step-long-lt
idle-fw-to-step-long-br-to-step-long-fl


Turning Circles

Covers the banking of the character as they lean into turns of various angles. Begin with a brief forward motion before turning to capture the moment of initially leaning into the turn, followed by enough turning to create a cycle.

walk-fw-to-turn-rt-circle-wide
walk-fw-to-turn-lt-circle-wide
walk-fw-to-turn-rt-circle-tight
walk-fw-to-turn-lt-circle-tight
run-fw-to-turn-rt-circle-wide
run-fw-to-turn-lt-circle-wide
run-fw-to-turn-rt-circle-tight
run-fw-to-turn-lt-circle-tight

Snaking

Similar to turning circles, this specific motion records the weight-shift during changes of direction from left to right.

walk-fw-snaking
run-fw-snaking


Wild Takes

If you still have time on shoot day, recording wild takes will give you something to play with and reference later to see how closely your system replicates the true motion of the actor. These long takes of improvised motions can produce more natural responses if you call out directions to the actor mid-take, offering up more sporadic and reactionary movement—similar to that which the player will command. It is inadvisable to enter wild takes into the full system beyond an initial import for testing, as they will likely duplicate other moves captured individually.

walk-wild
run-wild


(Courtesy of Sony Interactive Entertainment.)

Interview: Bruno Velazquez
Animation Director—God of War

Revitalizing the style of such a well-known character as Kratos after working with him for over a decade must have been a fun and challenging experience. Were there any animation rules or dos-and-don'ts you built up for him over the years?

Over the course of seven games, we kept an internal document called "Kratos Rules" that the team adhered to for all aspects of realizing Kratos: everything from the way he looked to the choices he would make and, of course, the way that he would move and be animated. These rules included such things as "Kratos always maintains eye contact with the enemy unless there is a major rotation; otherwise he is dialed in and focused on the enemy at hand." The hit-frame should personify this idea so his hits feel deliberate and final. For the new God of War, we looked over each rule and revised them accordingly to reflect an older, more measured Kratos now responsible for raising his son Atreus.

Interview: Bruno Velazquez step back from the enemy. He is always moving forward or making the player feel like he is about to move forward.” While this is still true in the new game, we added the following to the rule: “When forced to fight, Kratos will advance and maintain a smart dominance of the situation. However, he is older and wiser and attempting to turn away from violence as a solution. He wants to change to show his son there is a better way to resolve a problem.” This was the first entry to the series that heavily utilized motion capture. What were some of the key challenges of shifting the team over from a keyframe approach, and how did you maintain a consistent style between the humans and creatures? At the beginning of the project I was concerned about getting a team primarily built for keyframe animation to transition to a motion-capture-heavy project. The way that it ultimately worked out was to get the whole team to jump head-first into the process. We purchased a motion capture suit that did not require cameras or a complicated setup and encouraged the animation team members to try it themselves. We spent many hours taking turns at wearing the suit and recorded as much data as we could. Once the animators got to apply their own motion to our various characters, they were able to quickly get past any misconceptions and concerns about the technology and became more comfortable with it. This process emphasized the use of motion capture as one more tool available to us and by experiencing the process themselves from beginning to end it helped them to accept its use. In the end, we did hire professional actors and stunt performers to capture the motion, but having the animators understand the process from beginning to end was a big step for us. Once we got comfortable using more realistic animation data we were able to determine how much we could push our fully keyframed characters. We always used Kratos’s and Atreus’s motions as a barometer to keep us consistent. You state that achieving the balance between fun and responsive gameplay and natural movement is one of your key goals. Do gameplay requirements ever conflict with animation and how do you typically resolve them? Gameplay requirements always seem to affect and conflict with our inner animation desires. However, our team understands that gameplay is king. This is why we worked very closely with the combat designers who are responsible for building Kratos, Atreus, bosses, and all enemy AI characters. As we figure out the gameplay and identify the timing of motions that will serve the gameplay best, we identify data that includes hit-frame numbers and the total amount of frames a combat move needs to feel responsive. The animation team then finds creative solutions to maximize the use of keyframes available by focusing on the spacing of our poses. A good example of this is the motion of the Blades of Chaos, which are keyframed frame by frame because the spacing and positions of each frame had to work for gameplay as well as being smooth enough so the trail effects wouldn’t look linear. When working on the cinematic kills where Kratos dispatches his enemies, however, the animation team had much more freedom to let loose with the animation timing. This allowed for the animation team to reach a nice balance with the gameplay team. A notable visual style choice in God of War was a complete omission of camera cuts in cinematics and action scenes. 
What was the reasoning behind this decision and what were the related challenges?


Early on in the project we decided that we wanted to make this God of War more grounded and unflinching. Our goal was for the player to feel like they are right there next to Kratos and Atreus, along for the entire duration of the adventure, by focusing solely on their perspective. Because of this, we decided not to cut the camera to show a passage of time or a glimpse of what "the bad guy" was doing in his lair. This was all in aid of a more personal story, so not cutting helped us to make it feel more realistic and visceral than previous games. We quickly discovered that it was going to be very challenging to shoot scenes, and planning was critical. The moment that things fell into place was when we changed our approach from shooting like a film to instead producing more of a stage play. Once the actors adapted to approaching scenes like this, it really made a big difference. We would spend at least one day rehearsing before shooting so the actors could do a table-read together, absorb all the previsualization we had prepared, and rehearse the staging and movement with the virtual camera. In the end, it all worked out as a well-coordinated dance between our actors and cinematographer.


Chapter 12

Our Project: Animation Team Management

Scheduling

With game projects often spanning multiple years and requiring hundreds if not thousands of animations to be created, not to mention a whole team working simultaneously to bring the whole game together, it is imperative to understand how to create an effective game animation schedule. This chapter features a number of best practices and things to consider when building a schedule, with the ultimate goal being to deliver the best animation quality within the finite time of the development schedule.


Front-Loading

In an effort to avoid an overload of work at the end of the project, it is worthwhile to aim to produce as much as you can at the start. The obvious catch is that game development is such an unpredictable beast that it's hard to know what will stay and what will be cut. As such, teams may sometimes wait until certain elements are locked down before moving onto the next stage, not starting cutscenes, for example, until the entire script has been approved. Front-loading eschews this for an approach that accepts things will be cut or changed: it's easier to redo a certain percentage of work than to wait for a certainty that never really exists and only start then. The traditional method of waiting forces all the work to be done toward the end of the project, which makes quality content almost impossible. As such, gameplay animations and cutscenes should be created with the understanding that a percentage will be redone—but that is video game animation.

Prioritizing Quality

Prioritizing work is essential to ensure the animation team is working on what is required most at any point in time, so prepare to be flexible on this throughout the project. A common pitfall of working in a creative field is that creatives often want to work on the most enjoyable and satisfying tasks, but those are often not the most important tasks needing to be done first, with the groundwork for a game project often being laid with more mundane and standard work. That said, the highest priority in animating great games is the quality of both the assets and the overall experience. Leaving all the best parts, such as "easter eggs" or polish, until the end (or, as is often the case, saying no to them when you've a mountain of work laid out until the end of the project) will usually see these elements cut from the game.

A Battlefield series easter egg. Non-essential but adds to the player's enjoyment and can result in another round of press when discovered. (Courtesy of Electronic Arts.)

Saying no to too many great ideas in order to fit a schedule results in a dull game, so it's highly advisable to squeeze in little details and things players will notice and appreciate throughout the project, even if it blows out the schedule (which usually just results in rescoping the quantity toward the end). That is where the magic of a great game really lies.

De-Risking

De-risking unknowns is the fastest way to get a real handle on your schedule: eliminate, or succeed at, the tasks and aspects of the design that are furthest from the team's prior experience. This is more common in new IPs than sequels, which already have an established way of working and a general gameplay feature set. Previz and prototyping should therefore be focused on these areas as early as possible, with the aim being either to prove that these elements will make it into the game or at least to establish how long the creation of such assets may take so that a more accurate schedule can be drawn up. If the schedule is simply not workable, then either a new approach must be investigated or the importance of the new gameplay scaled back, so that if it fails to materialize there is less chance of it adversely affecting the whole project. Unknowns can also come in the form of new technologies or pipelines that must be developed in order for the animation team to work. Many a game project has been sunk because a vital piece of technology the team was waiting on became available too late in production. While it may not be possible to know whether risky new tech or approaches will work in the early stages of development, a go/no-go deadline must be established, with backup plans for alternative tech or pipelines accounted for. In addition, at the start of the project, all the risk must be weighed, and reduced if it adds up to too much. For example, unknowns such as a new IP + a newly formed team + a new game engine + an entirely new gameplay mechanic all compound the overall risk, when at least one of them is guaranteed to become a major issue. While no ambitious game project is without its problems, too many problems can cause a project to fail, so they must be recognized and dealt with from the beginning. A conservative approach to scheduling and risk management can help avoid time sinks, but the flipside is that it may also stifle creativity. No great animation system was ever created without some risk of the unknown. Knowing when to attack early or hold off only comes from experience—just make sure your experience retains a good dose of youthful naïveté and a cavalier approach to exciting new ideas.

Predicting Cuts and Changes

The single best approach to prioritization one can develop in order to minimize wasted effort is to look ahead and evaluate all the potential issues

that might arise with each individual system, section of a game, or complex cinematic. While front-loading work assumes a degree of cuts and changes, and previz and prototyping are a great way to de-risk potential hazards, simply becoming aware of the incoming work that is most likely to be cut and lowering its priority so you attack it as late as possible means you often avoid having to deal with many of these tasks at all, as they're often cut from the game before you spend any time on them. As an example, if a raft of cinematic scenes is to be created for a level over the coming months, but there's an area of that level that is highly suspect—displaying "red-flag" elements that haven't worked in past projects—create the scenes for that part of the level last rather than working sequentially. There's a high chance the other teams will either figure out the issues (saving you rework) or cut it entirely. Importantly, be sure to raise your concerns with the individuals working on that section to get conversations started and ideally fix the area before you reach it.

Adaptive Schedules

A schedule need only be stuck to until it no longer works. Reprioritizing to fit the ever-changing game means that the schedule must adapt. The point of a schedule in the first place is not, as is often mistakenly thought, to know exactly what will be done on the game and when. Instead, the schedule should serve only as a constantly updating document that allows educated decisions to be made on the fly, as early as possible, as to how to deliver the best-quality game within the allotted time. For example, knowing that it takes two weeks to fully deliver a weapon set of animations, when there are only ten weeks before the milestone and six weapons remaining (twelve weeks of work), serves as a tool to potentially drop one weapon, speed up the process while reducing quality, or shift animators around to free up additional help to achieve the milestone, depending on its importance.

Moving asset quantities around to adapt to a living schedule.

Knowing the incoming issues ten weeks out offers a variety of options that simply don't exist when surprised by an incoming milestone with no time to adapt.

Conflicts and Dependencies

No game team discipline works on its own. Characters first need concept art. Rigging first needs a character. Animation needs the rigged character, and design needs it all put together to start making the character playable. If any one element of the chain fails to deliver their part on time, that can have a knock-on effect down the line.

(Chart: axe, sword, spear, and dagger combat animation sets feeding dependent programming, VFX, and audio tasks across weeks 25-36.)

Typical dependencies related to the animation team.

The animation schedule should take into account the major dependencies the team has on other disciplines and, importantly, the team should communicate as quickly as possible when they become aware of impending issues that affect teams depending on them. It is also important to remain in continuous contact with teams upstream so as not to be too adversely affected should they slip on their end. It's better to overcommunicate than be caught unaware when that asset you were depending on hasn't even been started because another team's priorities shifted. In an ideal world this would never happen, but human nature means some things get missed, so it's best to protect your own team in this way. Should an unforeseen conflict arise, it can be mitigated by always having other work the team can continue with in the interim until the asset is delivered, allowing them to bounce around and make progress elsewhere in the schedule. This applies on an individual basis, too, where a gameplay animator can often become stuck waiting for an animation's implementation or other adjustments from a programmer or designer co-worker. It's even useful to have more than one DCC scene open at once, provided your computer can handle it, so you can immediately jump back and forth to avoid ever being stuck waiting.

Milestones

Milestones, the mini-deadlines often created for the purpose of appraising the current state of the game's schedule (often for an external publisher), can be a double-edged sword. Too far apart and they become meaningless, as it's hard for a team to visualize what they need to do today in order to hit a target several months out. Too close together and they also become meaningless, because the game is always in a state of panic to hit the next target.

The best deadlines for an animation team are practical and measurable goals such as the initial pass of a new character's basic move set or implementing the raw mocap and camera pass for a certain section of the game's cinematics—something that can be viewed by anyone on the team and established as being achieved or not. Milestones that require assets to be finished or polished are subjective and unnecessary outside of finishing the game or preparing the game to be shown outside the studio, such as for a demo or the capturing of a promotional video—something that should ideally only occur very infrequently.

External demos for conferences are often cited as the biggest time sink in any game project, as unfinished elements are rushed to appear finished, or better than they otherwise would at this stage of development. However, when properly scheduled, demos and similar visual milestones can be a great galvanizer for the team, allowing them to coalesce on a vision that may otherwise meander, as everyone has their own idea of what finished animation looks like.

Teamwork

Games are rarely animated alone (the big games, at least), so it's likely that you'll be working as part of a team while animating. The trinity of gameplay animation is most often the combination of animator, programmer, and designer. Similarly, character setup requires an animator, character artist, and rigger/technical animator. Environmental animations benefit from a combination of animator, level artist, and level designer. Cinematics most often bring together animator, character artist, and level artist. Put simply, no one element of a game can be created by an animator alone, so a desire to collaborate and an empathy for others' requirements will allow the animator to be of maximum effectiveness in a team environment.

Collaboration

Collaboration is the act of working with others to create something, and in games, that something is often one that has never been created before. As such, there's no real wrong answer to any creative challenge, only an ever-improving result after much discussion and trial and error until the game ships. After an initial discussion of the task at hand involving all parties, each should return to their desk with an understanding of what is required to achieve a first stab at the goal. For example, it may often only require an animation and the programmer to implement an attack action into the engine to get it working, from which point the designer can iterate on the "feel" of the action. Ideally, the programmer will have exposed variables to the designer such that the designer can make modifications without the others' involvement, so they

can each move onto the next task until another pass at the animation or a change to the code is required. This iterative workflow is essential to allow a team of collaborators to work in lockstep and make as many adjustments in as short a timeframe as possible. Moreover, they can work simultaneously with one another: the animator should create a quick animation for the programmer to implement, then continue improving and updating the visuals at the same time as the programmer and designer are collaborating. Being personable with those you work with will pay dividends over the course of a game's development. When someone asks a favor of you (such as requesting a quick animation for prototyping), always remember that at some point you will likely need something from that person to realize your own ideas. An entire team going the extra mile on every task can create amazing things, especially toward the end of the project when operating at full steam. Remember that everyone wants the game to be as good as possible, so on the rare occasion that another discipline's work is negatively affecting your own, such as environment art in your cinematic scene that keeps changing and breaking the shots, the best way to solve these issues is to communicate how both teams depend on that area both looking great and working. Sometimes a slight camera adjustment to accommodate a better environment is worth it, and doing so will increase the likelihood of requests being reciprocated. The hands-down best way to get someone on your side with regard to fulfilling an idea is first to show it to them. As stated earlier, a game animator has the unique ability to mock up ideas via previz, so exciting teammates with something they can already see, improve, and embellish with their own ideas will almost always win them over to your side.

Leadership

Assuming a leadership role on the animation team, be it a lead or a technical or artistic director, requires a varying skill-set depending on the studio, but all have the prerequisite that the team must look to you to enable them to work on a daily basis. Many young animators wrongly believe the role of lead is somehow easier due to producing less work for the game overall, at least from their vantage point. Worse still, some animators' desire to attain the role can come from wanting to be "the boss," calling the shots on how every detail of the game's animation is created. This could not be further from the truth, with the most effective leads assuming a subservient role to the team and expending all their efforts to ensure the team is best supported to create the best animation possible. Moreover, if the lead is the best animator on a team, then they are generally wasted in a lead role. An effective team makeup balances individuals

proficient in animation, technical knowhow, experience, and youthful enthusiasm, and hires should ideally be stronger in at least one of these areas than the lead. A great lead need only have general knowledge of all aspects of game animation creation and, importantly, must be able to view the project from a higher level than simply "does this individual asset look good?"—allowing the lead to see everything in context. A team, naturally, is made of people. Strong soft skills can be among a good lead's best assets, such as:

• The ability to clearly communicate direction and give feedback
• An awareness of potential issues before they arise, and how to avoid/solve them
• Answers to questions or, just as importantly, knowing how to obtain the answer
• An empathy toward team members' issues and an eagerness to listen
• An innate ability to inspire and motivate the team by providing an example to follow

Once an animation team is operating efficiently, then and only then can leads find time to squeeze in some of their own animation tasks, though they must be careful not to take their eye off the team long enough that problems can creep back in.

A lead role under time constraints toward the end of a project can often feel like protecting the team from an encroaching workload, where every new animation request adds to an already intimidating schedule. While any leadership position should provide a buffer between high-level (potentially chaotic) decision-making and a team member's day-to-day work, and animation schedules should ideally account for late additions, keeping a team in the dark is never as fruitful as having them engaged. In situations like this, it is often tempting to fight against new additions, as the lead sees the planned polish time toward the end of the project dwindling, but ultimately the lead's main role is to lead the team in creating the best game possible. When these requests come in, it is recommended to

A realistic schedule, buffered for requests and unforeseen issues.

consult with the animators themselves on whether they wish to take on the work, so they have some ownership of the decision. Leads must be careful about referring to the team as "my" team. It is not "your" team; it is "the" team—the one on which you serve. "My" team confuses a feeling of protectorship with one of ownership, which is true only insofar as when the team fails, the lead is responsible, but when the team succeeds, the team is responsible.

Mentorship

Mentoring mostly occurs informally, by osmosis: a lead providing an example, and a corresponding animator following it, allows the mentor to pass on experience, both good and bad, to the mentee. If ever there were an artform that would benefit from standardized apprenticeship training, it would be game development, with a junior animator so akin to an apprentice learning on the job. Formalized apprenticeships such as paid internships are a great way for smaller studios to fill less-crucial or less-demanding positions while training up potential future hires, but the best mentorship comes by way of pairing an experienced animator with one less so who looks to them as a role model. Ideally, working together on the same project, discussing current problems, and asking for specific advice works much better than general advice on how to be better. Unfortunately, internships in game development are even rarer than entry-level jobs, most of which request a certain amount of experience, creating a Catch-22 for students. It is advisable to ignore the experience stipulation on job descriptions when first applying, as a good reel will count for more, and smart studios that weren't initially considering an entry-level position would do well to bite at the chance of mentoring a student already displaying raw talent.

As a young animator looking to learn, the hands-down best way to get a better awareness of the goings-on around you is to speak to other developers and ask them what they’re working on. It may seem obvious, but game developers are far more often than not very happy to discuss their work with anyone willing to listen, especially if you are as enthusiastic as they are. It is highly recommended to engage with more than just animators. The best game animators cultivate a degree of understanding of all areas of game development so as to be most effective in any given situation.

Hiring

There are few single decisions in a multiyear project that, if chosen wisely, will make all the difference to a team's success from that day onward. Hiring the

right person for a spot on the team is one of those, while hiring the wrong person can be the single most costly mistake for a game budget, as excess time is sunk into righting that wrong on a daily basis. Hire for a combination of skill and attitude, not skill alone. New hires can and should learn on the job and improve their skills, but changing a negative or even lackluster attitude is next to impossible. In general, the best animation team hires, regardless of skill level, are self-motivated, autonomous actors who seek out opportunities to improve the game, displaying an appropriate balance of artistic and technical aptitude with a constant desire to improve. Each studio (and team within it) has a distinct culture, and the ultimate decision of an interviewer always rests on the question "Would I work with this person?" However, it's crucial for interviewers to be wary of simply hiring future colleagues similar to themselves, lest the team become homogeneous in its creative output. Hiring to round out a team so it excels in a variety of criteria always makes for a better game. A variety of life experience, not just animation skill, allows every creative discussion to become more fruitful as team members approach it from a variety of angles, which often results in out-of-the-box thinking. This can be the largest deciding factor between the most memorable games and those that fail to leave a mark and are consigned to the forgotten annals of gaming.

The Animation Critique

Growing a thick skin is something every animator needs to do to be effective on a team. The surest path to mediocre animation is to refuse all comments, feedback, and criticism of your work. Animation as a craft can be so all-consuming that you often can't see the forest for the trees and are unable to step back and see your work in progress with fresh eyes. That's why a great initiative to implement on any team is a formal critique process, regularly showing and reviewing work in a group setting such that everyone can have input and can learn from others' work. Not to be confused with design by committee, the final say still lies with the lead/director, and the animator is free to choose which feedback to act upon. It is a valuable resource to have the combined talent and experience of a team contributing to the overall quality, as well as learning from each critique as a group. Held as regularly as desired (film-like "dailies" would be overkill due to game development's tighter timelines), meeting in a room and volunteering work to be reviewed for feedback creates an incredible motivator in seeing your peers' work. Importantly, the lead should drive the meeting so as not to dwell on any one animation too long, with longer conversations taken "offline" for deeper discussion later. Animators who rarely participate (in either showing or vocalizing) should be encouraged to do so, with care taken to give less-confident animators a voice so that no one participant dominates. Beyond improving the overall quality and promoting stylistic consistency, sharing feedback is a great team-building exercise, making every team member feel involved in the common goals of the team.

Outsourcing

External and/or contract work is becoming more and more standard practice, as higher budgets and the boom/bust nature of game development teams make it difficult to scale a team appropriately during and after a project. Because of this, many areas of game development considered less tied to gameplay (which requires rapid iteration), such as concept work, cinematics, and tertiary art assets, can all be farmed out to other studios with varying degrees of success. The biggest tradeoff is cost vs quality. Even if the external artists are competent, the distance makes the iteration loop that much harder, at best causing misunderstandings due to communication errors or at worst causing problems because the external team didn't put their best artists on the task after an initial glowing art test. In order to dramatically improve the quality of external work, beyond supercharging that communication loop via internal team members dedicated to receiving work and giving feedback, it is vital to make the external teams feel as much a part of the main team as possible. Real-life face-time with at least their leads is an essential way to maintain a human connection. While there are more high-tech solutions, regular contact via an old-fashioned phone call while both parties have the same video(s) up is the best way to give feedback and share project news that the external contact can then forward on to their team. Just as with the main animation team, sharing the latest videos, concepts, and other exciting visual developments is an incredible motivator, as well as a regular reminder of the quality standards they should be shooting for.


(Courtesy of Konami.)

Interview: Yoshitaka Shimizu
NPC Animation Lead—Metal Gear Solid Series

You've now been at large studios in both Japan and North America that have a focus on high-quality animation. I'm sure readers would be very interested in how you perceive any differences in work culture?

I think that game development in Japan is quite different from that of other countries. In Japan, we tend to have a generalist approach, so each animator has several responsibilities at the same time. Japanese game animators need a variety of skills, such as handling tools, rigging, scripting, and so on. In fact, as a lead animator I have worked on planning the schedule, managing, animation supervising, operating capture sessions, mocap acting, creating animations, and so on. While this might sound extremely challenging, thanks to that we are able to acquire a holistic view of the game development process. On the other hand, in other countries animators tend to become specialists. Animation direction and management are separated, and there are more supervisors than in Japanese game companies.


Individual members have fewer responsibilities and, as a result, it's hard to have a clear overall view of the project. I believe that both development styles have pros and cons. In a perfect world, it would be best for animators to try their hand at as many challenging and varied tasks as possible at the same time for a few years … That would allow them to become specialists with a global understanding of the game development process, and as such, able to contribute a lot more to the team.

You have an interesting combination of cinematics and NPCs on your résumé. Why have you focused on those areas in particular?

I love MGS's story, so I focused on cinematic animations. When MGS3 was released on PS2, the animators still had separate responsibilities: gameplay animation and cutscenes. I realized that sometimes these two areas lacked consistency. On MGS4 on PS3, I created a new animation team composed of ten animators that worked on both game and cutscene animations, called the "keyframe animation team." We worked on the following elements (including rigging): bosses, robots, and animals. We didn't work on human characters (mocap data) at the time. My responsibilities expanded as a team lead; however, I was still able to see the work produced by the whole team, so it was easy to find consistency issues between gameplay and cutscene animations. Regarding MGSV on PS4, it was difficult to work on both types of animation because everything had become a lot more technical than on PS3. I decided to solely focus on NPCs as I love the technical side, and because I believe NPCs are the characters who truly enable players to play the game by framing the context. They are a crucial element of creating a realistic world. AI characters are living on a virtual plane, and I would like to learn more about life structures and living things in general so as to be better able to create a new form of life in games.

Metal Gear Solid 4 in particular had many experiments in interactivity during cutscenes. Was this something the team was consciously pushing for, or was it just a natural progression of where you saw the medium going?

Our game director doubled up as cutscene director, so basically he suggested new interactive ideas regarding the cutscenes from the very beginning of the project. If team members came up with good ideas, we sometimes tried to implement those. Our camera specialist and the game director worked in tandem and made all decisions regarding the camerawork. Of course, the game director had the final say, but we were really putting film direction at the heart of the project.

You partially mocapped the GEKKO for MGS4 and were involved in the rigging. Were there any particular challenges you had to overcome in retargeting to these nonhumanoid characters?

I think that as far as challenges go, the biggest one I had to overcome was that I sometimes had to do everything by myself. The GEKKO's concept was that it was a "living" weapon. That's why I suggested we use mocap. We weren't sure whether mocap would be a good fit for the GEKKO's movement, so we had to run some tests. It was our first time using our in-house mocap studio, and I had some colleagues help me at first and then I tried mocap acting by myself. After that I worked on post-processing the marker data, retargeted the data, and created the rig and animations on my own. The point here is that when we wanted to try new expressions, each animator did everything on their own.
This way, when they found problems, we were able to proceed while fixing issues without interrupting anyone else's work. Additionally, it was easy to iterate—I think that's one of the advantages of being a generalist. Since the GEKKO's movements are my own, it is the most memorable character for me! From that experience, I started to build our mocap pipeline seriously. Thanks to that, we were able to significantly cut production costs on MGSV.

You've worked on quite a few mocap-heavy action games, and have even ventured into the suit yourself! Do you find the mocap experience has helped or hindered your development as a keyframe animator?

Of course, the more I handled mocap data, the better my understanding became of physics and of what realistic movement looks like. Actually, I did quite a lot of mocap acting! As an animator, I have first-hand experience of the kind of amplitude each gesture requires in order to capture and then reproduce natural movement with mocap. Thanks to that, I believe I became much better at directing actors for mocap, and conversely, the number of mistakes and retakes was significantly reduced. I would like every animator to fully embrace all aspects of mocap, including acting!


Chapter 13

Our Project: Polish and Debug

In the closing laps of the race, the project is all but done, and it's all about putting it in a box or on a digital storefront. But the game has likely never been more broken, as everything is crammed in at the end. Now is the time to polish what's there and fix what's broken. But no team (irrespective of size) can polish and fix everything. Applying an equal level of sheen across the entire game is not nearly as valuable as making it more polished where it counts and less so where it doesn't.

Closing Stages of a Project

Before tackling bug-fixing, it's worth knowing the closing stages of a game project and what will be expected of the game animator at each stage as the project wraps. The standard closing stages in the delivery of software—Alpha, Beta, and the final Gold Master—are often confused even by game developers, especially as some teams choose to ignore the real purpose of each stage and the checks and balances they are designed to provide.



[Figure: timeline of milestones — Alpha → Beta → Release candidates (RC1, …) → Gold master. Important deadlines toward the end of a project.]

Alpha

While every studio's definition differs slightly, the aim of Alpha is to achieve a content-complete stage for the project, with the express purpose of allowing the entire game to be visible and playable by the team in its unfinished state, regardless of quality. For the animation team, this means the implementation of all gameplay animations and cinematics in one form or another, such that a play-tester can understand what is happening in each cutscene or gameplay sequence. Sliding T-posed characters simply won't cut it.

The purpose of this stage is to allow edits and cuts that were previously impossible to appraise before the entire game was visible, revealing its strengths and weaknesses, overall pacing, and, crucially, what is and is not important for the core experience. Making edits or cuts to elements already added to the game, even in rough form, can be tough, but warning signs at this stage can and will have a massive adverse effect on the final game's quality if they go unnoticed or ignored. Importantly, the "content-complete" nature of Alpha is something of a misnomer, because oftentimes new content will have to be added to implement fixes or edits—but as long as these additions are planned for, they shouldn't blow out the schedule.

Beta

Beta must come long enough after Alpha to allow time to implement the edits and other changes resulting from Alpha. By the Beta phase, ideally all animations should be "shippable," and the team should only be fixing bugs from that point onward. When fixing an issue, the animator is expected to also catch visual problems if they can be solved at the same time. Realistically, however, dedicated creatives will attempt to polish until the very last second. If you've spent several years working toward this point, then it's only natural to give it your all before your work goes out into the world.

Release Candidates and Gold Master

Unlike programming, design, and other technical disciplines required to fix and optimize (plus audio and VFX, which are always last over the finish line), the animation team, along with other art-related fields, will likely be hands-off at this stage. The final days are spent building potential "release candidates" of the game to submit for publishing, first-party console approval, or disc printing (and in the case of digital self-publishing, actual release).

QA will be joined by freed-up team members in playing the game to find last-minute bugs. Even at this late stage, it's best to hold off on vacations, because animators may still need to be on hand to help fix any last-minute showstoppers, though any aesthetic bugs found now will likely be waived. Rest assured that at this point it will be painful to let go, as you will likely still be discovering visual issues that, from the animator's standpoint, are easy to fix but can be ruinous if they break something else. As such, it's too risky to be making changes to the game in the final weeks for anything other than progression-stopping issues or major glitches that will be encountered by everyone playing through.

Once a release candidate has passed through quality assurance and been approved, it's considered a Gold Master—the final version of the game that will be delivered or uploaded to the world for players to experience. Now all that's left is a well-earned break before returning and anxiously awaiting press reviews, hopefully followed by a celebratory release party with the team!

Animation Polish Hints and Tips

The last few weeks of a project will be spent working through the (hopefully decreasing) list of bugs assigned to you by either QA or other team members, fixing as many issues as possible to ensure the game is not only in a stable state but doesn't feature noticeable visual glitches in the animations. Below is a selection of the most common animation bugs to look out for, as well as some tips on where to focus polish and bug-fixing efforts in the project's closing stages.

Foot-sliding

When the feet don't lock to the ground, they look floaty and greatly reduce a character's connection to the world. Fixing foot-sliding within a single animation is simply a visual task, but it most commonly occurs either when the speed of a character's translation and animation don't match during something like a run cycle, or during bad blends that cause the FK legs to swing the feet out.

For the former, the problem is avoided by having animators completely control the speed of translation in-game. If, however, design can modify speed, then the animator must bring that new value back into the DCC to match the animation (remembering to remove the designer's modification once done).

The latter case is much more complicated. Each case will be unique, but they generally occur when blending between two animations, where the blend itself causes an awkward movement that noticeably distracts from the source animations. For example, if an idle has the left foot forward, but the run cycle starts with the left foot raised rather than the right, it will cause the left foot to move backward when blending to the running motion, which is undesirable.


[Figure: Idle to run displaying foot-sliding during blend.]

The solution would be to instead start the run cycle with the right foot up, so the blend accounts for the right (back) foot moving forward across the blend frames as the character moves forward from the idle to the run. Whenever blending, the motion of the limbs and so on must be considered for the duration of the blend—we are essentially still animating between two "poses" when blending.
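For the speed-mismatch case above, the relationship is simple enough to sanity-check with a few lines. Here is a minimal sketch—the function name and all values are hypothetical, not any engine's API:

```python
# A minimal sketch of diagnosing foot-sliding caused by a speed mismatch.
# authored_speed is the root-motion speed baked into the run cycle in the
# DCC; design_speed is the value a designer has since tuned in-game.
def playback_rate_to_match(design_speed, authored_speed):
    """Rate to play the cycle at so the feet track the ground at design_speed."""
    return design_speed / authored_speed

rate = playback_rate_to_match(design_speed=4.5, authored_speed=4.0)
# rate = 1.125 — the cycle must play ~12.5% faster; better still, the
# animator re-authors the cycle at 4.5 m/s in the DCC and removes the scaling.
```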

Popping

Again, popping is easily fixed within a single animation but more often occurs between them: the character noticeably jumps over a single frame or small number of frames, destroying fluidity. Between animations, the cause is usually a blend that is too short between two poses that differ enough to be noticeable. When overly short blends are used, it is most likely for a gameplay purpose, so instead try to utilize longer blends or transitions with the ability to regain character control early via interrupt flags or body-part blends, as covered earlier. If none of these is the culprit, follow up with programming or the technical animators, as there may be a process running in the background causing issues. It's always good to ask around first if you can't immediately identify the cause of a bug, rather than simply sending the bug off to someone else, as it may still be your responsibility.

Contact Points

During cutscenes and other times when the camera is close, the level of fidelity must increase to match. While during the fast action of combat it was acceptable for your character's hands to slightly float above the enemies' shoulders during a throw, in an intimate cutscene when the fingers must rest on a table, they need to match perfectly or they stand out as being wrong.



[Figure: Finger touch points before and after polish.]

Naturally, IK performs this job best, and depending on rig complexity and desired visual fidelity you can utilize IK setups for individual fingers if required. The most universal solution, however, is the ability to "snap" one object to another over a series of frames. For example, if one character's hand touches another's shoulder for a few frames, attach a locator or similar helper to the second character's shoulder and use that to snap or constrain the hand location for the contact duration.

If there’s a task such as matching hands to weapons or other objects that often occurs in your production, consider having the technical team automate the process or create a tool that allows you to do it faster. This rule goes for any repetitive action that could be automated by a technical animator.

Momentum Inconsistency

In player character animation, it is painfully apparent when the character is, for example, running full-tilt toward an obstacle, then, on performing a vault over it, shows a noticeable drop in momentum as they start the vault. A complete pass over all animations to ensure consistency and preservation of momentum between actions is essential once all gameplay systems are in place. If an action can be performed coming from a variety of speeds, then consider unique versions or transitions in/out of the action for each speed.

A common example is an action like landing from a jump or fall, which relies on the game detecting the ground collision to interrupt the falling animation with the land.

If there is too long a blend, then the ground impact will naturally feel too soft, but more often than not the detection may be delayed a frame or two, causing the character to visibly stop on the ground before even starting the land animation. Instances such as this can be discovered by enabling slow motion in the engine if available or, even better, stepping forward a frame at a time while playing the game—both essential engine tools for that final level of animation polish.
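One way to implement the per-speed variants mentioned above is a simple speed-bucketed lookup; a minimal sketch with hypothetical thresholds and clip names:

```python
# A minimal sketch of picking a speed-matched transition, so momentum reads
# as preserved. Thresholds (m/s) and clip names are illustrative only.
def pick_vault_clip(speed):
    if speed > 6.0:
        return "vault_sprint"   # variant authored from a full sprint
    if speed > 3.0:
        return "vault_run"      # variant authored from a jog/run
    return "vault_walk"         # variant authored from a walk or standstill
```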

Interpenetration

Going by a variety of names (crashing, clipping, interpenetration, and so on), the intersection of 3D elements is a common occurrence in unpolished games, be it characters intersecting the environment or, more commonly, a character's limbs intersecting their own body—both of which are immersion-breakers that contribute to a game feeling less solid and polished than it might otherwise be.

[Figure: Incorrect weapon placement can cause arms to interpenetrate.]

A degree of interpenetration is expected when the character is customizable (can wear different armor or clothes) or when animation is shared across multiple characters—especially those of different sizes. However, this can be minimized by animating to your larger, chunkier characters and considering variable sizes in your idle poses and the like (where crashing may be more prolonged and noticeable) by holding the arms away from the body somewhat. Also consider how weapon or other item placement on a character might interpenetrate with the arms—keeping the waist and hip areas clear is recommended, with weapons cleanly stored on the back being optimal. Custom rigging can be utilized to move penetrating clothing elements, such as high collars intersecting the neck or head, away from the offending body parts, though this will need to be done in real time unless it can be prebaked into every animation.


Assigning yourself bugs is a great way to create a task list toward the end of a project and allows you to hold onto superficial visual bugs that only you care about and wish to fix (time permitting), as you will likely be unable to submit changes to the game without a corresponding bug task past a certain point.

Targeted Polishing

Why not polish everything equally? Because a thousand animations equally polished to a good level are far less valuable than the hundred or so seen repeatedly throughout the game polished to an excellent degree. Concentrate where the player will see it most, and the perception of the game will be raised higher overall. If your game's core gameplay centers around melee combat, then that's where to spend your efforts. If it's story, then perhaps the facial and other acting-related actions and systems should be your main target.

It is a large part of the role of a lead or animation director to understand what is key to the game. What are the signature moves, or the ones with the most flair, that will likely be seen over and over again and become synonymous with the game or individual character? Ensuring these individual actions, or the systems that enable them, are absolutely flawless and memorable will impart to the player an impression of quality where it matters the most.

If the game has a story through which the player must travel to "finish" it, this is referred to as the "critical path." While gameplay animations can generally be performed anywhere, independent of location, polishing one-off animations such as cinematics or environmental animations along the critical path should always take precedence over optional side content. Importantly, however, don't selectively polish so much that a gulf of quality is created between polished and unpolished animations, where less-critical examples look worse by comparison.

[Figure: flowchart of scenes — Intro scene, Story scenes 1–3, and Ending scene on the main path, with Sidequests 1–3 and an optional romance branching off. The "critical path" denotes where to focus animation polish efforts.]

Memory Management and Compression

One major battle toward the end of a project, as every discipline rushes to get all their data—animations, level art, audio, and so on—into the game, is hitting animation memory limits.

Game projects typically establish memory allocations for each department at the start of the project, depending on each area's needs or importance to achieving the final result. With each console generation, animation teams benefit from greater and greater space allocation, but we always find ways to fill it up, so it's always good to work efficiently regardless.

To overcome this, game engines use a variety of compression algorithms and other tricks and techniques to make each animation you export run with the minimum memory footprint while still retaining as much quality as possible. Ultimately, however, it is a tradeoff between memory and quality, so it is highly recommended to manage memory throughout the project, as well as to devise a plan early for dealing with memory compression during the closing stages. Too many games hastily apply a blanket compression value across the entire game's animation rather than carefully selecting areas for more or less compression or using custom settings, resulting in animation-heavy games sometimes displaying jerky movement across the board, even on important high-visibility assets.

Depending on your engine, the ideal setup allows the technical animators to preset types of compression (not just percentage values) for individual or groups of animations depending on the type of action. For example, simple static/standing actions might favor avoiding foot-sliding by retaining higher fidelity in the legs, whereas actions where both of a character's hands hold a two-handed weapon will benefit from higher quality in the arms, so the object doesn't "swim" (float around) in the hands.

[Figure: Original uncompressed curves vs the same curve post-compression.]

That way, the settings of these presets can be adjusted to squeeze out more space while retaining different levels of quality where you want it, further increasing the overall perception of fluidity and quality. Failing this level of control, at least ensure you are on top of compression before the last days, to avoid necessitating a panicked drop in quality across the board right before ship.
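A minimal sketch of what such per-action-type presets might look like; the categories, bone groups, and tolerance values here are all hypothetical, not any engine's actual settings:

```python
# A minimal sketch of per-category compression presets rather than one
# blanket value. Lower tolerance = higher fidelity retained for that group.
COMPRESSION_PRESETS = {
    # Static/standing actions: keep legs high-fidelity to avoid foot-sliding.
    "standing": {"legs": 0.001, "arms": 0.01, "spine": 0.005},
    # Two-handed weapon actions: keep arms high-fidelity so the prop
    # doesn't "swim" in the hands.
    "two_handed_weapon": {"legs": 0.01, "arms": 0.001, "spine": 0.005},
    # Background/NPC filler motion can take heavier compression.
    "ambient_npc": {"legs": 0.02, "arms": 0.02, "spine": 0.02},
}

def tolerance_for(animation_category, bone_group):
    """Look up the error tolerance to apply on export."""
    return COMPRESSION_PRESETS[animation_category][bone_group]
```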

Debugging Best Practices

Test/Modify Elements One by One

When the cause of a bug could be one of several potential issues, it is imperative that you work through each possibility one at a time rather than changing several variables in one go. If you fix it after changing several variables, you can't be sure which variable was the correct one. Searching for solutions should be a process of elimination, changing one element at a time so you can be sure which change fixed the issue—which helps you recognize similar problems in the future and narrow down solutions faster.

Late in the project, while attempting to fix two bugs on the same asset in one go is recommended (i.e., a single animation that has two separate animation pops), fixing animation pops in two or more different cinematics and then submitting them in one version control change list is inadvisable. One of these changes may break something, and you'll have to go back and figure out which one; if you submitted all the changes at once, you may have to revert all of them despite only needing to fix one. For maximum efficiency, work through your bug list in a methodical manner.

There are few things scarier than a bug that fixes itself when you don’t really learn what happened, because it will probably come back, likely at an inopportune time. If pressed for time, then it may be okay to let it go, but otherwise, it’s always recommended to determine what happened for future reference. Be equally suspicious of errors you can clearly see that “should” break something but don’t, such as a clearly mislabeled naming convention. It’s likely it’s just not happened yet, and you’re better off keeping everything in good working order, so fix it in advance.

Version Control Comments

Correctly commenting in version control on each change you make—not just for bug-fixing but when iterating on an animation at every stage—allows you to go back to prior versions should you wish to make changes for bug-fixing reasons or, more likely, when you want to use an older animation to solve a task elsewhere.


[Figure: Useful version control comments enable backtracking.]

Moreover, there's always the "hit by a bus" scenario, where you fall sick or are otherwise off work at a crucial time and your colleagues need to refer to changes in your work. As such, go beyond "updated," "WIP," and similarly useless comments, and instead detail what changed in the latest commit to help yourself and others get to the version they need in a bind.

Avoid Incrementally Fine-Tuning

Take the example of modifying the length of a blend that, if too short, will cause popping but, if too long, will feel sluggish. If the blend is currently 2 frames, you could simply keep incrementing it 1 frame longer until it just works, but you may get to 7 frames and find you've gone too far—so should you go back to 6 or 5? Even worse, what if you reach 7, feel it should still be higher, and have wasted all that iteration time?

In instances like this, it is recommended to modify in large increments, then take a hot/cold approach to which direction to try next. In the above example, it would be faster to jump immediately to 10 frames. We're fairly confident it wouldn't be higher, but we need to eliminate everything above 10 just in case. Next, we drop back down to 6 (halfway between 2 and 10), and it feels even better, but it's now a little too fast again. So next, we move it to 8 frames (halfway between 6 and 10), and now we've gone too far back to it being too slow. Now we know the best value is 7.

Using this faster/slower, hotter/colder approach to "feel" out a blend time (or really any time you need to input a variable) is far better than simple guesswork and will ensure you arrive at exactly the correct number in as few iterations as possible. The more experienced you become, the fewer iterations you will take to arrive at the best look and feel.
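This hot/cold approach is essentially a binary search. A minimal sketch, assuming the 2-frame blend pops and a 10-frame blend plays smoothly, with the hypothetical pops_at predicate standing in for the animator's judgment on each playtest:

```python
# A minimal sketch of interval-halving a blend length, as described above.
def tune_blend_frames(pops_at, low=2, high=10):
    """pops_at(n): True if an n-frame blend still pops (i.e., is too short).
    Assumes `low` pops and `high` plays smoothly."""
    while high - low > 1:
        mid = (low + high) // 2
        if pops_at(mid):
            low = mid    # still popping: need a longer blend
        else:
            high = mid   # smooth: try shorter to avoid sluggishness
    return high          # the shortest tested blend that played smoothly

best = tune_blend_frames(pops_at=lambda n: n <= 6)
# Tests 6 (pops), then 8 (smooth), then 7 (smooth) -> returns 7, matching
# the worked example above in three playtests instead of five.
```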

Troubleshooting

Every engine differs in its vulnerabilities and fragilities, so only through experience will you learn the particular ways animations commonly break in your workflow. However, there are some fairly standard ways you can break a game animation, regardless of the engine.

Here is a quick list of what to look out for and how you might fix these problems once recognized.

• Naming error/mismatch: The designer or programmer was expecting one name, and you gave them something else (usually via a typo). This is perhaps the most common way to call a supposedly missing animation and will usually result in a T-pose. Best check back through your exported filenames.
• Wrong file address: Some engines expect game files to point to or be built from a DCC scene file. Here, the animation exists but is pointing to the wrong file, which will result in the wrong animation being created. Ensure your exported animation points to the correct scene and has no typos in the address.
• Missing scene file: The same as above, but the file either never existed or was somehow deleted. Either way, if you find the animation asset is pointing to a file that simply does not exist, the first stop should be the version control history of the scene, hopefully just requiring it to be restored.
• Broken scene file: The scene file is either broken/corrupted or contains something it shouldn't that is preventing export. It is most likely (especially if not referencing the rig/character) that the rig itself has been modified inadvertently, but it may also be attributed to additional hidden elements existing in the scene.
• Jerky movement: Within a single animation, the character pops and moves robotically, differing from the original animation. This is most likely caused by the animation being overly compressed on export.
• Frames out of sync: If, via game debug info, frame 15 of your animation in-game looks like frame 25 of the animation in the scene file, for example, then you likely entered the wrong frame count for export, starting 10 frames too late. Either that or a naughty designer scripted the animation to skip the first 10 frames.
• Animation plays at the wrong speed: Same as above, but that same evil designer sped up your animation without telling you. Get the value from them and modify your animation to work with the same timing, then kindly ask them to remove their time scaling.
• Character doesn't end in the correct location: The game only knows where the character is located based on the collision, so if an animation ends and jumps the character back to the starting position, for example, it is likely that the collision wasn't following or was animated incorrectly. Best check it in the DCC scene file after viewing the collision debug display.

[Figure: Collision erroneously animated away from the character.]

• Entire character is broken: The animations are likely fine, but the character model/rig asset itself was updated incorrectly. This is the most likely cause when every single animation for that character doesn't work. Follow up with the rigging/character team.
• Animation explodes character: Usually an issue with the rig, animation scene file, or exporter mismatching the bones. When there's a mismatch in bone count between the character and your exported animation, you'll likely witness the correct animation playing on the wrong bones, which can look horrific. This is because engines often match bones not by names but via an index or list; if the count is off, even by one, it will affect all others in the list following the offending bone. This is not usually something an animator alone can solve, so get technical animators or programmers involved. If the animation appears to double the character over, scaling up all joints to double their length in a freakish sight, this is usually an additive animation issue: the full source additive animation is combining with the underlying animations rather than just the offset between the additive animation and a selected pose. This is usually a technical issue in script or code, or the additive animation was set up incorrectly.
• Character position is offset: Animations play offset from where they should be. Most commonly caused by the incorrect scaling of the character in the engine or DCC, but it can also involve the process the game engine uses to modify the origin of location-specific scenes like cutscenes. Ensure everything is set up correctly in the DCC if the issue only occurs with that one scene. Collision in-game can also push characters out from the environment or other characters, so it may need to be worked around or turned off.
• Character stretches back to origin: Select vertices of the character or other animated object stretch back to the origin, likely because those vertices are incorrectly skinned on the current in-game character asset, even if they are fine in your DCC. Follow up with the technical animators or character team.

[Figure: Missing/unskinned vertices cause skin to stretch back to the origin.]


Understand that throughout development, you'll be breaking things. It really is impossible to make an omelet without breaking a few eggs, and games are an incredibly complex omelet. As such, it's important not to worry too much about these bugs appearing; the more you fix them, the faster you'll get at recognizing issues and ideally preventing them before they occur—making you a better game animator all around.

If you still can't figure out the issue despite the above list, here are some "Hail-Mary" tips to try if you're just blindly stabbing in the dark, usually involving a process of elimination.

• Point the game asset to another animation or scene file and see if that works. If so, then you know it's your animation asset or scene and not the code/tech playing it in-game.
• View your animation in an available in-game viewer if you have one; that way, you can be sure the animation itself has exported fine, and this time it's more likely the code/tech playing it in the game.
• Duplicate an in-game animation asset you know is working and modify from there, rather than starting from scratch. This ensures you have all the correct settings in place as a sanity check.
• More complicated, but open the exported animation file in a text editor and compare with a similar one that works, looking for inconsistencies. This eliminates the exporter as a potential issue.
• If all else fails and you can't see what's wrong in your scene, but you know it's the culprit, blow away your old scene and rebuild from scratch (copying across your animation, of course). This nuclear option is sometimes faster than the time it takes to guess the issue with a seemingly innocuous scene that simply isn't working.


(Copyright 2007-2017 Ubisoft Entertainment. All Rights Reserved. Assassin’s Creed, Ubisoft, and the Ubisoft logo are trademarks of Ubisoft Entertainment in the US and/or other countries.)

Interview: Alex Drouin

Animation Director—Assassin's Creed, Prince of Persia: The Sands of Time

The original Prince of Persia was an animation pioneer, and Sands of Time more than lived up to that legacy. What attracted you to take up the challenge of that series, and was that always a pressure?

The legacy was more of an inspiration than a challenge. It ended up affording me more influence on the project because everybody wanted the animation to deliver the quality of the original title. I never really felt pressure because I sincerely never really thought about it, and my only motivation and focus was on the end result—delivering a great game. We were just trying to make our character more real than the average game. Creating a new experience for the player, something magical; something that would fit in the tales of the Arabian Nights.

What led you to the approach of using many more small transition animations for fluidity and control than was the norm at the time?

First, I think it is important that I was working full time in a duo with a gameplay programmer and that our dynamic made the creation of that character possible. Often, I would start by making a small 3D previsualization of how it could look and then break things down into a flowchart.

We were really aiming for fluidity more than anything else. There was a quality that we wanted to reach, and we would add animation transitions here and there to achieve it. As soon as it started to give us a great result, we began to apply that methodology everywhere, and it became the way to approach and envision all gameplay recipes.

Similarly, Assassin's Creed was a pioneer in motion capture for gameplay. What made you unafraid to adopt this new unproven technology?

As mentioned earlier, delivering a great game should always be the main focus. As I moved from Sands of Time to AC, we passed from a linear experience to an open world. The content of the game was a lot bigger, and realism was a huge focus. As an animation director, I had to come up with game mechanic recipes, make flowcharts, and supervise other animators, and I still wanted to animate at least 50% of my time. So how could I achieve that by keyframing the whole thing while also keeping the animation team small? Since I had already tried a little mocap for Sands of Time, it appeared to me to be a great solution to quickly produce a lot of realistic data and level up the animation style and quality across all the different animators. Still, mocap could only be used for the main actions. All the various transitions and variations, which would end up being most of the data, would still be keyframed.

The weight of the character was a key component in Assassin's look and feel. Do you have any advice for animators looking to retain weight without sacrificing gameplay feel?

That's a tough one. First, I would say that I never try to achieve a style that I would describe as "realistic." I'd rather aim at something that would qualify as "believable." By keeping that in mind, you won't approach animation the same way. Instead of trying to replicate life, you will try to emulate the feel of it. Human beings obviously do not react as fast as a video game character. For a good feel on the joypad, actions have to be interruptible at any time. As soon as the player gives a new input, it needs to manifest on the screen. The first step is to stop the actual motion and start going toward the next one. In terms of animation, the next one to play does not have to be the actual desired motion; it need only be a transition going toward it. That's how we ended up having a massive amount of transition animations. It has to cover all changes in the character state, and if you want a reactive character, you will always have to sacrifice a minimum of believability. Then it just becomes a question of choice and taste to know where and when to manifest it.

As for the illusion of weight, I would almost never use a straight mocap data file but instead make a lot of variations on it, like taking frames out here and there to add snap or weight, adding more up and down on the hip movement, et cetera. It is important to put it inside the engine to validate all your choices, because the feeling can change from what you have animated inside your 3D software.

Is there anything else you might like to add about your general philosophies on game animation?

As a game is played, the game designers are constantly trying to communicate with the player, and the first channel is often the character itself. The main role of an animation is to communicate the game design intention, so be knowledgeable of the design and what it is trying to say.
In the conception phase, pre-render fake gameplay situations. It will give you a general idea of the different animations needed and how they are going to flow. You can then use this footage to start identifying the main branching poses. It is also helpful for communicating your intention to the rest of the team. Remember, an animation is only a little piece of a big puzzle, and you need to focus on the puzzle and not the individual pieces. Make animations flow with those before and after, and keep everything simple. Nothing should stand out too much to attract the eye, as the player is going to see them many, many times.

The impression it gives is more important than the actual look of an animation. Of course, when animations are more important and intended to communicate more, focus on them. As you start pushing and assembling things in the game engine, you should focus on the pleasure on the joypad. Is it fun to play? Does it read well? Cutting or adding one or two frames here and there can change the flow and the experience of a mechanic. Work closely with the programmers; it is really important if you want to obtain a great result!


Chapter 14

2D and Pixel Art Animation

A Brief History of 2D Game Animation

While every year new games push game animation technology and techniques, today's animators would never have reached these heights without the decades of ingenious game animators who came before them, often working with crude tools and within limitations that are crushing by today's standards. Video games have featured animation since the 1980s, and learning about the medium's origins is not only fascinating but can be invaluable, as many of the techniques pioneered in these formative years are still in use in 2D games today.

It is impossible to talk about the origins of 2D video game animation without a dive into what is now known as pixel art. While today's 2D games can be rendered with huge, anti-aliased images, vectors, or even flat polygons, the only option at the medium's inception was the pixel. Art and animation featuring characters composed of individual pixels, known as "sprites," was for years the de facto approach to game development, and eventually techniques improved to the point of rendering some 2D game animation to a timeless quality.

Sadly, when 3D games came along, pixel art was all but wiped out by the early 2000s. Mirroring traditional animation in film, studios that stuck to the 2D approach saw sales drop no matter how masterful the art and animation or how excellent the game design, as public tastes shifted toward explorable 3D worlds and the versatility of 3D animated characters. Those dark ages of early 3D video games rarely produced appealing results, and it was several console cycles before 3D games could again render animated characters to an appealing and timeless fidelity.

Nowadays, stagnancy in blockbuster game design and a (perhaps misguided) pursuit of photorealism have contributed to a maturation of the gaming public's appetite toward more varied and experimental experiences. As such, 2D game design and pixel art have had something of a renaissance—not only as a more economical approach to riskier content creation but because, without the memory limits of old, pixel art can replicate the beauty and fluidity of 3D offerings with a more attainable variety in styles. Not to mention the pervasive nostalgia for a "retro" look in now-older players.

While the digital block-assembly of pixel-drawn video game art has been around since the late 1970s, its origins can be traced much further back, from cross-stitching in 6th-century BC Egyptian tombs to the first stone-tile mosaics of the third millennium BC in Mesopotamia! Basically, any time humans can assemble complex imagery from smaller, standardized component parts, they will use whatever medium is currently available to them.

Capcom’s Street Fighter III series is often cited as the pinnacle of 2D sprite animation. (Courtesy of Capcom.) Whereas pixel art was originally borne of production and memory limitations, nowadays it echoes the late-19th century Impressionist art movement by representing a motion toward minimalism and simplicity—creating only loose representations of characters and worlds that would otherwise require high-definition recreation in AAA games down to the smallest skin-pores. Just as the video game medium plays to its strengths when we offer up as much authorship over the experience to the player, this visual simplification 268


[Figure: Throw on a couple of health bars and this 3rd-century mosaic from Kourion, Cyprus could be a pixel art fighting game.]

Moreover, small game teams can't hope to compete with the largest ones on a technical and budget level, so a distinctive visual style is not only production-friendly but can often prove more timeless than the latest creeping advances toward photorealism. Most important of all, however, is that great 2D art and animation direction simplifies the possibility space such that it pushes game design to the fore. Large 3D games must often expend person-years of effort just to get a character up and running around an environment, where stylized games cut to the chase and instantly begin experimenting on what really makes or breaks a video game—the gameplay.

While 3D games' screen contents and composition change dramatically under player control of the camera, 2D video games' staging is more heavily authored. Every single shot of the game has been art-directed. Every color-palette combination is thoughtfully chosen to balance the image. And most important for animators, the lack of interpolation in 2D animation demands that every animation frame be carefully posed and considered, with no computer-calculated in-betweens.

Of course, 2D games nowadays employ more than just pixel art, with more memory allocation and readily available art tools allowing for a variety of methods to create non-3D artwork. 2D game art includes everything from pixel animation to traditional hand-drawn animation similar to classic animated movies. In this chapter we're going to be looking at the full variety of different methods for creating 2D animation for video games, and the tools and techniques shared between them, many of which were instigated by trailblazers decades ago.


Why Choose 2D Over 3D?

Even more so than with motion capture, the decision to employ 2D animation (and therefore an entire visual style and gameplay design) will affect every aspect of the project, so it is something that must be carefully considered from the off. Here are some key arguments for and against going 2D from an animation perspective.

Pros

• Low barrier for entry: Not only are 2D art and pixel art DCCs cheaper than 3D, but the closeness to traditional drawing means anyone can jump right in. Animating in 2D is far less abstract than manipulating body parts in 3D space.
• Easily use classical techniques: Smears, multiples, squash and stretch, etc. are all instantly available to the animator with no reliance on rigging. Effects and post-processes are similarly easy to apply to drawings.
• More artistic control: Every single frame is considered, and viewable only from the angle the animator chooses. In particular, the animator has exact control over the snappiness of motion via bespoke held frames, without relying on the interpolation of 3D.
• Tighter gameplay: Gameplay mechanics can be more watertight in 2D, with fewer opportunities for the player to break the game than in fully realized 3D worlds.
• No need to wait for 3D models: Once characters are designed, just jump in and start animating them. And if you need to add an item/object for a specific move? Just draw that too!
• No rigs: Not being bound by specific rigging for every possible move allows more customization for each and every animation. There's no concept of breaking the rig either, so characters can be pushed to extremes.
• 2D is a bold stylistic choice: With 3D as the default, a lot of what makes 2D so prevalent these days is artistic appeal—and personal preference!

Cons

• Less versatility for animation reuse: You cannot have multiple costumes or shared animations between characters.
• No interpolation: Drawing every frame is a lot of work, whereas 3D animation curves provide interpolation between keyframes.
• Characters cannot change during production: You need to redraw the entire animation set if the character design changes, which is clearly undesirable.
• Only one perspective: The camera cannot rotate or move into the screen beyond basic scaling, which limits gameplay types and visual variety to an extent. Animations always look identical to the player regardless of how many times they see them.


Different 2D Production Approaches

While pixel art was originally virtually the only method for game visuals, nowadays there are a variety of ways to create 2D animation in games. Here are the most common methods.

Pixel Art Animation

The simplest, and perhaps easiest to pick up, involves drawing pixel art directly in the art/animation program before exporting to the game engine. There are now a plethora of free or affordable pixel art-specific DCCs supporting animation, so it's never been easier to jump right in—find an updated list at www.gameanim.com/book/resources.

[Figure: AZRI animated in pixel art.]

There are a few different techniques for animating in pixel art, with the most common and approachable being to start dropping pixels straight onto the canvas, adding each frame until the animation takes shape. Like the "straight-ahead" method of 3D animation, this shows the fastest results and is the easiest to jump right into, but rarely produces the best animation. A more considered approach involves roughing out the frames in a simplified form, such as stick figures, sketched lines, or silhouettes, allowing for quick manipulation of the overall motion before any detail is applied to the characters. Silhouettes can be further worked up as blocks of color to better define the various body parts and how they move independently to support the overall motion.

Another popular method, especially when a bank of frames has already been created, is to take a previous frame and modify it to create new ones, such as cutting sections (or isolating them via layers) then rotating, mirroring, scaling, etc. into the new desired position. From there it's all clean-up. This is perhaps the fastest method to see results, though it naturally limits what new kinds of poses can be created, so it is best used sparingly. As always, understanding when to use each method is the key to a fast workflow, and this will only come with experience the further into a project you are. Once you've hit on a formula, you will be able to apply it to each new character or move as they come up.

Traditional Drawings

Pixel art animation of old took a leap in quality when exploration began on digitizing paper-drawn animation frames into the computer. Originally drawn in a pixel-like manner via graph paper and manually copied into game engines, then later scanned directly into the computer, using traditional media usually involves a clean-up process.

[Figure: Before digitization, line work was drawn for all Street Fighter II's signature moves. (Courtesy of Capcom.)]

For line drawings, this generally involves conversion to black and white (via desaturation and adjusting brightness/contrast) in order to remove any anti-aliasing incurred in the scanning process. Next comes scaling down to a usable size while avoiding further anti-aliasing by employing "nearest-neighbor" scaling, then further cleaning up the image to ensure consistency of line width and pixel density. If images are already colored, some form of palette conversion will also need to take place to ensure each image/frame uses consistent colors from the specific palette.

Photoshop "actions" (pre-recorded instructions for repetitive steps like the above clean-up stages) are valuable workflow hacks to avoid monotonous repeated processes. Another popular action involves selecting a scanned frame's outline, subtracting a couple of pixels in width, and creating a fill layer underneath the line drawing to instantly color in a scanned line drawing for easier editing or previewing inside the game engine as a solid color.
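For those scripting these clean-up steps outside of Photoshop, the same process can be batched. Here is a minimal sketch using the Pillow imaging library; the filenames, threshold, and target size are placeholder assumptions:

```python
from PIL import Image

# A minimal sketch of the scan clean-up steps described above.
scan = Image.open("scanned_frame.png").convert("L")  # desaturate to grayscale

# Crush anti-aliased grays to pure black/white to recover clean line art.
line_art = scan.point(lambda p: 255 if p > 128 else 0)

# Scale down with nearest-neighbor so no new anti-aliasing is introduced.
target_size = (160, 144)
pixel_frame = line_art.resize(target_size, Image.NEAREST)
pixel_frame.save("cleaned_frame.png")
```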

Alongside pixel art, drawing digitally with a stylus is perhaps the most common method for animating 2D video games. While pixel art aims to emulate the retro aesthetic, hand-drawn art can reproduce virtually any art style used in traditional animation, making this method approachable to anyone with traditional drawing skills.

Importantly, the stylus streamlines the clean-up process by keeping everything in the computer, usually involving drawing over sketched roughs with clean, finished line art.

Rotoscoping

Before motion capture, the best way to produce lifelike motion was to employ rotoscopy—the approach of essentially tracing over frames from pre-recorded video footage. Just as with motion capture, the more skeptical animator may view it as cheating, but it is yet another tool on the animator's belt to achieve the desired result. It is even a valid production technique to rotoscope over 3D animation, should that match the desired aesthetic.

[Figure: Another World is one of the earliest rotoscoped video games. (Courtesy of Eric Chahi.)]

Importantly, this is different from using reference, as the desire is for an almost 1:1 match with the source material rather than using the footage to inform separately keyframed animations. Like motion capture, rotoscoping ensures consistent on-model rendering of the character, provides more accurate in-betweens and better volume preservation, and allows for easier rotation and translation of characters in the third (into-camera) axis that is otherwise difficult in 2D.

Modular Animation/Motion Tweening

Originally popularized by Adobe's Flash, motion tweening is perhaps the most accessible method for rapidly creating vast quantities of 2D game animation, though not without its limitations. Borrowing methodologies from 3D animation, tweened characters are essentially composed of individual body parts that can be squashed, stretched, rotated, and translated in 2D. Animations are therefore limited to the 2D axes, with any motion beyond these axes requiring body parts to be redrawn with new images.

Modern tweening animation DCCs like Spine go further and actually skin the 2D body parts to skeletal joints just like in 3D, by mapping the images to polygons, allowing animators to deform each body part as they would a 3D mesh, albeit only in 2D.



[Figure: Unruly Heroes mixes traditional animation with modular animation. (Copyright 2020 MagicDesignStudios. All Rights Reserved.)]

This enables faux-3D rotation in the third dimension, giving the impression of twisting body parts as they appear to rotate but are in fact simply scaling or moving component parts of an image.
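To illustrate the underlying idea (this is not any specific DCC's API), a tweened limb is just a chain of 2D transforms, exactly like a simplified joint hierarchy; all positions and angles below are made-up values:

```python
import math

# A minimal sketch of modular/tweened posing: each body-part image carries a
# 2D translate/rotate/scale and inherits its parent's transform.
def apply(point, tx, ty, rot_deg, sx=1.0, sy=1.0):
    """Scale, rotate, then translate a 2D point."""
    x, y = point[0] * sx, point[1] * sy
    r = math.radians(rot_deg)
    return (x * math.cos(r) - y * math.sin(r) + tx,
            x * math.sin(r) + y * math.cos(r) + ty)

# Forearm tip posed by the forearm (child), then by the upper arm (parent).
# Squashing sy on the parent fakes the limb turning "into" the screen.
tip_local = apply((18.0, 0.0), tx=0, ty=0, rot_deg=45)
tip_world = apply(tip_local, tx=20.0, ty=0.0, rot_deg=30, sy=0.6)
```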

Understanding Historical Limitations

While less relevant for hand-drawn 2D animation looking to take full advantage of modern hardware, pixel art games to a greater or lesser degree trade on their retro aesthetic. Though few modern pixel art games adhere to their 8-bit or 16-bit console influences 100% (instead targeting the rose-tinted glasses of memory), gauging where you wish to aim will do wonders for grounding your own pixel art animation and maintaining visual consistency.

Regardless of 2D approach, establishing technical settings early in any project is essential before beginning production, as changing art at a later stage is a magnitude more difficult in 2D than in 3D. While we no longer have the technical limitations of the classic consoles, understanding and emulating the retro aesthetic benefits greatly from establishing at least some limitations to better represent the games of old. Beyond just a workload consideration, sticking close to limited color palettes, pixel dimensions, and frame counts all contribute to faithfully emulating the classic look, so it is worth understanding what those limitations were (and why) before beginning your project.

Screen Resolution

Primarily affecting how large pixels appear onscreen, establishing your project's screen resolution will essentially define how large your pixels display on modern screens, and therefore how detailed characters and environments can be drawn. Most of all, this defines which classic era the visuals most emulate.

Because modern monitors and HD or 4K+ TVs are a magnitude more pixel-dense than the screens of old, a calculation must be performed so that your pixels scale by a whole number, ensuring they render at a uniform size ("pixel-perfect scaling"). For example, an HDTV displaying 1920x1080 pixels should ideally render an image at 960x540 for double-size pixels, 640x360 for triple, and 480x270 for quadruple—each step making the pixels blockier, reflecting an older generation of console. Below is a table illustrating the technical specifications of classic consoles relevant to animation creation. Note that resolutions were for non-widescreen aspect ratios, so they will never exactly match modern widescreen projects and should only be used as a guide.

Era     System      Resolution*   Screen Palette   Total Palette**
8-bit   Game Boy    160x144       Monochrome 4     Monochrome 4
8-bit   NES         256x240       54               432
16-bit  Genesis     320x224       61               1,536
16-bit  GBA         240x160       256              32,768
16-bit  SNES        256x224       256              32,768
24-bit  Neo Geo     320x224       4,096            65,536
64-bit  Arcade***   384x224       32,768           131,072

* NTSC (Japan and US) where differing from PAL versions (Europe and Australia)
** Only an approximation, as the total colors available varied with technical tricks
*** Referencing Capcom CP System III

Non-uniform pixels can be an aesthetic choice, but uniformity is required to avoid unsightly stretched pixels, as well as to correctly reproduce "scanlines"—a post-process effect representing the look of the CRT TVs and arcade monitors of old.
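The pixel-perfect scaling calculation described above reduces to finding the largest whole-number multiple at which an internal resolution fits the display; a minimal sketch, with the example resolutions as assumptions:

```python
# A minimal sketch of "pixel-perfect scaling": find the largest integer
# scale at which a chosen internal resolution fits the output display, so
# every art pixel maps to a uniform square block of screen pixels.
def pixel_perfect_scale(internal, display):
    iw, ih = internal
    dw, dh = display
    return min(dw // iw, dh // ih)  # integer division avoids stretching

scale = pixel_perfect_scale((640, 360), (1920, 1080))
print(scale)  # 3 — each art pixel becomes a 3x3 block on a 1080p screen
```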

Character/Tile Size

Closely linked to screen resolution, character size onscreen is one of the key driving factors for which resolution to choose. How large a character is onscreen affects their relationship to how much of the world is visible—a gameplay-driven decision similar to camera distance affecting environmental awareness in 3D games. Platform games tend to work better with smaller characters, for example, so more of the environment is viewable onscreen.

To conserve memory, classic game environments were created via repeating tiles that fit together seamlessly when laid out together. Characters were similarly made of tiles to better fit into memory with as few wasted blank tiles as possible, as well as to match dimensions standardized by the environment (e.g., jump heights may be three characters high). This standardization is still worth adopting today to make sizes and distances readable by the player for gameplay purposes.


[Figure: Classic Capcom arcade games' memory allocation was represented by allocating sheets of card. Artists needed to print and cut up their sprites into chunks in any manner they could to fit onto the card boards. (Courtesy of Capcom.)]

Gameplay and memory aside, another important factor when considering what size your sprites should be is the kind of detail you wish to include. Smaller sprites will have a harder time showing facial detail than large ones, if that's important to your game, whereas if you wish your game to be more abstract (and, not to mention, simpler to draw), then smaller is better.

Celeste’s small player character sprite size affords much greater environmental awareness. (Courtesy of Miniboss). 276

Ultimately, the more detail in each character frame, the more work will be required to draw them, so that should be the most important production consideration. Simplified pixel art animation allows small or single-person teams to create entire worlds and memorable casts of characters that would present an otherwise insurmountable workload in high-definition 3D art.

Palettes

Because classic consoles had limitations on which colors they could display (see the previous table), especially on how many colors could be used per sprite, historical game characters were drawn with only a small number of colors. In part, this was advantageous because too many colors in too complex a sprite can be messy and hard to read. As such, even today it is worth limiting your color palette when drawing sprites.

[Figure: AZRI sprite and the associated limited palette.]

Limiting the palette not only simplifies the visuals and makes pixel art easier to wield, it allows for the option to swap palettes via indexed colors (individual pixels point to a palette entry, such as color 01, 02, or 03, rather than storing color information directly). While often used to provide simple variety in character costumes, palette-swapping can enable a host of effects, like flashing a character when damaged or shifting the hue to red when on fire, for example. The number of colors you select for your sprites and the game overall will, along with pixel size/resolution, contribute to the overall era it emulates.

To best enable alternate color schemes for your characters, it is worth color-separating elements as much as possible, even going so far as using different palette entries for identically colored elements in some instances. (For example, if your character has both red shoes and gloves, you may want to use different reds for each so they can be separated later.) Importantly, do not reuse individual colors across other elements, such as a light skin tone in fair hair, as this will prevent changing the skin tone later without adversely affecting the hair color, and vice versa.
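To make indexed palette-swapping concrete, here is a minimal sketch; the pixel data and palette colors are made-up placeholder values, and note how the gloves and shoes get separate entries even though they share a color:

```python
# A minimal sketch of indexed-color palette swapping. Each sprite pixel
# stores a palette index rather than a color, so costume changes or flash
# effects only require substituting the palette table.
sprite_indices = [
    [0, 1, 1, 0],
    [0, 2, 2, 0],
    [3, 3, 3, 3],
]

default_palette = {0: (0, 0, 0, 0),          # transparent
                   1: (200, 30, 30, 255),    # red gloves
                   2: (240, 190, 140, 255),  # skin tone
                   3: (200, 30, 30, 255)}    # red shoes (separate entry!)

# Alternate costume: recolor gloves and shoes without touching the skin.
alt_palette = {**default_palette, 1: (30, 30, 200, 255), 3: (30, 30, 200, 255)}

def resolve(indices, palette):
    """Convert indexed pixels to RGBA colors for rendering."""
    return [[palette[i] for i in row] for row in indices]

frame_rgba = resolve(sprite_indices, alt_palette)
```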

Sprite Sheets

Rather than storing frames as single images, "sprite sheets" were the traditional way of delivering pixel art animations to the engine. Individual frames were essentially laid out across a larger image with specific X and Y coordinates attributed to each frame. The animations were played back in-game by updating those coordinates each frame, producing the effect of a flipbook. Sprite sheets were all about maximizing animation memory, and the overall frame count for a character was limited by the size of the sprite sheet on which frames were laid out.

[Figure: A sprite sheet from GRIS, automatically arranged onto polygons in TexturePacker by CodeAndWeb. (Courtesy of Nomada Studio.)]

Modern-day sprite sheets and engines can place the images onto polygons, allowing even better storage of sprites as they can be more creatively packed into memory like a jigsaw puzzle, via a thankfully automated process. Even then, modern games have far fewer memory limits on frame count than older ones, so the number of frames becomes more of a workload decision and contributes to replicating different eras of sprite games.
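To make the classic flipbook mechanism concrete, here is a minimal sketch of frame lookup from a sprite sheet; the frame rectangles and hold timings are placeholder assumptions:

```python
# A minimal sketch of sprite-sheet playback: each animation frame is a
# rectangle of (x, y, width, height) coordinates into one larger image,
# flipped through like a flipbook. "hold" is how many game frames to show it.
RUN_CYCLE = [
    {"x": 0,  "y": 0, "w": 32, "h": 32, "hold": 4},
    {"x": 32, "y": 0, "w": 32, "h": 32, "hold": 4},
    {"x": 64, "y": 0, "w": 32, "h": 32, "hold": 4},
]

def frame_at(animation, game_frame):
    """Return the sprite-sheet rectangle to draw on a given game frame."""
    total = sum(f["hold"] for f in animation)
    t = game_frame % total  # loop the cycle
    for frame in animation:
        if t < frame["hold"]:
            return frame
        t -= frame["hold"]

rect = frame_at(RUN_CYCLE, game_frame=9)  # returns the third frame of the run
```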

Retro Case Study: Shovel Knight

Yacht Club Games' Shovel Knight, a retro platformer Kickstarter success story in the mold of Capcom's classic Mega Man series, is a great example of modern design sensibilities applied with a faithful NES look.

2D and Pixel Art Animation of modern design sensibilities with a faithful NES look. Despite releasing on more powerful modern consoles, self-imposed technical limitations were adhered to in order to produce what the audience feel is an authentic representation of the classic 8-bit aesthetic.

Shovel Knight’s retro aesthetic translated to limited animation frames—still more than possible in its 8-bit inspirations. (Courtesy of Yacht Club Games.) To achieve this, individual character sprites were limited to five colors. Black was used for outlines as well as to fill in form and to cut back on color needs, and shapes were kept big and chunky to fill the tiles. The world art was made of 16x16 pixel tiles and characters were multiples thereof. Gameplay dimensions such as jump height and length followed these dimensions as it’s a lot easier to describe a jump length in tiles than in pixels. Shovel Knight’s horned helmet, a key element of the character design, was kept as centered as possible and not moving around in frames, helping the character feel tight and reduce ambiguity in the world collision. While keyframe counts were more generous than actual 8-bit limitations, strict efficiency was maintained by only creating readable keyframes with no unnecessary in-betweens, using single static frames wherever possible. The player-run animation was 6 frames, whereas NPC runs were mostly 2. Very few non-essential acting animations were created. Pixel-perfect platforming gameplay required precise collision. To aid this, sprites needed to fill their hitboxes (collision—see the “Hitbox vs Hurtbox” section later in this chapter), which were also broken down into tiles of 16x16 pixels. Hitboxes fit into three distinct types: 279

• Attack: Outgoing damage from attacks. On NPCs this was generally smaller than the world and defense collisions so the player doesn't feel unjustly attacked.
• Defense: The area that can receive damage. For players this was smaller than their art, and on NPCs it was a little larger. Leaning toward player-favoring collision aids the players' experience.
• World: Similar to a 3D character's cylinder/pill collision. This was the physics box that the character used to run into walls and walk on floors. This shape did not change across animations.

Shovel Knight’s characters and world consist of 16x16 pixel tiles, with various collision properties attached to each. (Courtesy of Yacht Club Games.) Once implemented, hitboxes were further refined with design iteration— often requiring modification of the animation pixel art. The player characters’ feet were always animated to the extent of the world hitbox during the idle and jump, and similarly relevant background art was drawn to fill the tiles as much as possible for visual connection between the characters and the world. Last of all, classic lo-fi visual effects were used as often as possible. These included screen-shake, sprite-shake, palette cycling, and speeding up or slowing down animations.

2D Game Animation DCCs and Engines
Like 3D animation DCCs, 2D tools are all unique but have certain things in common. These are the most common elements a game animator will become familiar with when creating 2D video games.



Standard elements of 2D game animation software. (Sprite courtesy of Jesse Munguia.)

Editor Screen Layout
• Tools: Standard drawing tools for creation and editing in the main edit window.
• Edit Window: The main area where all the editing takes place. Allows for easy zooming in for detailing and precise pixel placement or clean-up.
• Preview Window: A second view representing the final size alongside the zoomed-in edit window; working in the latter and reviewing in the former offers the best of both worlds. Meticulous editing often requires close-up work, so a second window showing the full picture in context saves constant zooming in and out.
• Layers: Separating different body-part elements onto layers, even on simple pixel sprites, allows for easier modification of individual components of an image, and affords the application of effects on chosen elements without affecting the base image. Retaining the line art on a layer allows for easier color filling underneath.
• Palette: The available color palette for editing. Even for modern creations with fewer limits on the color count, it is essential to establish limited colors for each character to avoid having to compress them to a smaller number later in the process.
• Timeline: Essential for animation, the timeline shows each frame in iconographic form for the assignment of frame timing in the manner of a classic "dopesheet." There is no curve editor in 2D animation, but frame timing is manipulated here by setting frame hold lengths.

Required 2D Software
This is a non-exhaustive list of the different types of 2D animation tools available for game animation. Find an updated list of these and other tools and their relative benefits at www.gameanim.com/book/resources.

Pixel Art Animation: Aseprite
Affordably priced so you can jump right in, Aseprite is perhaps the cleanest and easiest DCC for pixel art animation. Deceptively basic, it features virtually everything you need to get started drawing and animating in the classic style.

Aseprite. (Courtesy of Igara Studio.)

2D Art All-Rounder: Photoshop
While expensive, Adobe's generalist 2D art and animation software will also cover aspects of your game creation well beyond animation. What it lacks in animation and sprite functionality is made up for by general competence in all areas of art creation.

Photoshop. (Courtesy of Adobe.)


Modular Animation: Spine
The leading software for skeletal animation in games, allowing for a less linear development of 2D animation with some of the benefits of 3D animation, such as deformable meshes and interpolation between frames.

Spine. (Courtesy of Esoteric Software.)

Sprite Sheet Editor: TexturePacker
While more on the technical side, it is important to understand the process required to get your 2D animations working efficiently in any game engine. 2D game animations are generally exported to a sprite sheet, and this editor rearranges them into the most memory-efficient layout possible for integration into the engine.

TexturePacker. (Courtesy of CodeAndWeb.)


Game Engine: Game Maker Studio
While the leading 3D engines, Unreal and Unity, can also support 2D, they can be bloated with many features unnecessary for 2D. Game Maker Studio is perhaps the premier engine focusing only on 2D games, with an interface designed to be as accessible as possible.

Game Maker Studio 2. (Courtesy of YoYo Games.)

General 2D Workflow Tips
This chapter was created by interviewing several experts in the field of 2D game animation. Here are some of their collective top tips for how they successfully bring appealing characters to life.
• Try to initially make your animation with as few key poses as possible to sell the action, adding in-betweens only when necessary.
• Like 3D previz, aim to get animation sketches into the engine to prototype or playtest ASAP. Color or even finished line art is not required before exporting to the engine to playtest. Even background art can be in sketch form at this stage.
• Roughing in an animation via straight line art is similar to traditional drawing, and allows for more detail than silhouettes alone, but more detail makes iteration slower. Instead, try to keep frames loose and sketch-like until the animation is working.
• Fleshing out an animation in color blocks (different colors representing the various body parts) is perhaps the fastest way (after silhouettes or line art) to see how an entire animation flows while also providing enough information to easily track the different body parts.
• While not always necessary or recommended due to potential complexity, separating different body parts onto layers allows for editing individual components of a frame without needing to redraw everything underneath.

• As with 3D, build out any character by starting with the idle, then next focus on key moves such as base navigation. The character style should be honed here before copying to other moves to minimize rework.
• When creating multiple characters with a similar move set, formalize your key animations first (idle, walk, jump, etc.) so they can be compared to others with similar poses and keyframes, thus creating the motion equivalent of a character lineup.
• Frame counts for different strength attacks (when considering the ratio between slow/strong/large/heavy and fast/weak/small/light) can be better balanced by first establishing an average attack, then creating stronger and weaker ones with more and fewer frames respectively to nail the timing.
• To initially concept how elements of an animation might move, draw expected motion splines on a single underlying (therefore visible via onion-skinning) frame and mark dashes along that spline as frames to plan out both motion and timing. Early instructions for held-frame timing can be drawn via similar dashes along a line—albeit vertically or horizontally to the side, independent of actual motion.
• Make use of layer transparency to retain an underlying finished drawing of the character for consistency when beginning new animations. Keeping that drawing to the side of your new animation is an equally valid approach.
• While most animations naturally start from the idle pose, begin new animations by copy/pasting the head. This will quickly establish how much the head needs to change for the new keyframes, if at all. As more moves are created, a bank of different heads from which to start will naturally build up.
• After the head, other body parts like the main body mass can be sketched in to rough out the entire animation, providing the earliest idea of how the animation will look for timing and editing purposes. Combining this layered approach with pre-drawn motion splines is a solid way to build up your character's overall motion.
• Similarly, aim to keep key detailed sections like the character's face or patterned clothing more static during idles, initially just duplicating them for each frame. Too much movement blurs complexity, and low-res (highly pixelated) games especially need these details for players to latch onto.
• If using software that supports it, create automated macros or actions for repetitive tasks. Any automation you can create for your workflow, generally surrounding the import/export or clean-up process, will do wonders for your schedule when creating hundreds of 2D animation frames, and will aid in rapid iteration.
• If you can afford it, invest in a pen tablet and stylus. While a mouse/keyboard combo works for 3D and can work for pixel art, nothing matches the tried and tested method of drawing with a pen when creating fluid 2D animation, especially when beginning animations with loose sketching.
• Selective outlining, the process of blending black outlines to better match the color block they surround, can be made more efficient by converting line drawings to an "overlay" layer, such that the line darkens rather than completely overrides the color underneath, while also allowing for easier coloring on the underlying layer.
• Once detail polish begins, 2D and pixel art animation follows the "straight-ahead" approach of traditional animation. Like 3D game animation, the aim is to leave polish details as late as possible in the animation process. Making changes after the detailing process can require significant rework, so attempt to nail the gameplay before this stage.

2D and Pixel Art Game Animation Concepts
There are many different workflows and combinations thereof for producing 2D game animation, depending on the style of visuals and personal preference, the basics of which are covered earlier in this chapter under "Different 2D Production Approaches." The following is a collection of workflow essentials that are good to understand irrespective of approach, opening up new opportunities for improvement in your game whatever method you employ.

Outline Clean-up
Once the rough animation is fairly solid, drawing clean line art on a new layer is the safest way to begin cleaning up 2D or high-res pixel art. This works even better if color blocking was used to rough in the animation, as the line art outlines the shapes already determined. When working from scanned traditional art, desaturate the line art (and/or adjust brightness/saturation to force black lines only) to remove all color info. To create pixelated art from higher-resolution artwork, scale down via the "nearest-neighbor" setting to avoid re-application of anti-aliasing introducing unwanted alpha or gray values. Any desired anti-aliasing in pixel art should be hand-placed rather than left to scaling, to ensure pixel-perfect control over the motion. Ensure line art thickness is consistent throughout, avoiding unnecessary doubled pixels unless desired for the art style.

Outline double-pixels are generally deemed unsightly and should be removed as part of the pixel clean-up process. Replace them only tactically to add sharp edges to corners as desired.

Coloring
Unless roughed in via color blocking, the process of coloring should only be started once the animation is more or less locked. Be wary of too many colors making the frame cluttered and unreadable; sticking with limited palettes helps avoid this. Importantly, shading with gradients requires more thought than simply moving from darker to lighter shades uniformly across a shape, and should instead be used to describe form. Pick a light-source direction for the entire sprite set (or rather, the entire game) for consistency, and utilize light and shadow to show planes and shapes, with the light hitting different angles non-uniformly.

The top sword displays planes lit by a specific light source, while the bottom simply gradients in from the edge, which is less desirable.

Outlines help with the separation of different character elements. Art styles that omit them rely more heavily on color separation to show different body parts, especially when passing in front of the body or behind other limbs. A common trick is to shade limbs further from the viewer a different color (usually darker) to promote separation as they pass in front of and behind the body and other limbs. Selective Outlining ("Selout") is the technique whereby outlines do not remain black but instead match the color block. This creates a less harsh contrast between color and outline and essentially utilizes the outline as an extra level of shadow or depth. This can be drawn manually or by implementing the outline as an overlay (modification) layer atop the underlying color.



Street Fighter III’s Hugo before and after selective outlining. (Courtesy of Capcom).

Sub-Pixel Animation
When subtle movements smaller than a single pixel are required, the animator must employ what's known as "sub-pixel" animation, because it is impossible to draw smaller than a pixel. This involves creative use of the palette: in the same manner as anti-aliasing bridges the gap between two pixels via in-between colors, shifting those in-between colors over time gives the impression of motion in steps smaller than a single pixel.

Sub-pixel animation allows for single-pixel distances to be traveled over longer than a single frame using an approach similar to anti-aliasing, giving the impression of pixels moving less than one pixel every frame.

Sub-pixel animation requires the palette to contain gradients between colors to be effective, so it can become more expensive when animating single pixels that border multiple other colors. As with 3D animation, using different timing for different areas of the character will prevent the eye being drawn to one particular spot and give life to the overall animation, so mixing sub-pixel animation on some areas with regular non-anti-aliased pixel-shifting is a valid approach. Regardless of the method, this allows for smoother animation between frames, especially on detailed areas like the face, where subtlety is required and pixels can't so easily jump between one frame and the next.
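As a minimal sketch of the idea (all values invented for illustration), the following blends a moving dot's color across the two pixels it straddles; in practice the pixel artist would pick the nearest available palette colors rather than compute a true blend:

# Minimal sub-pixel sketch: a dot moving 1 pixel over 4 frames.
# Rather than jumping, the two pixels it straddles are tinted in
# proportion to the fractional position. Values are illustrative.

def lerp_color(a, b, t):
    """Blend two (R, G, B) colors; t=0 gives a, t=1 gives b."""
    return tuple(round(ca + (cb - ca) * t) for ca, cb in zip(a, b))

BACKGROUND = (0, 0, 0)
DOT = (255, 255, 255)

for frame in range(5):
    t = frame / 4                           # fractional position, 0.0 -> 1.0
    left = lerp_color(DOT, BACKGROUND, t)   # dot fades out of the left pixel
    right = lerp_color(BACKGROUND, DOT, t)  # and into the right pixel
    print(frame, left, right)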

Less elegant than using in-between shades/colors, employing doubled-up pixels can also achieve a similar result, though the motion appears to be stretching or creeping rather than moving between pixels.

Character Design Considerations
Character design should consider making limbs different colors from the rest of the body to better show them passing in front or behind during actions. A character all in one color looks good when limbs are separate from the body, but will become less readable in a leg-crossing passing pose during a walk cycle, for example. Nintendo's Mario was famously designed with dungarees (perhaps driving the whole plumber setting) to promote separation between arms and legs. As with 3D characters, look for opportunities to provide overlapping actions, such as hair and loose clothing. These will be final flourishes added once the main animation is done, but can help immensely with settling. Due to the historical limitations on frame count, overlapping action was often performed on only a section (or single tile) of the character so as to not require an entire new sprite in memory. As such, it is worth holding the pose of a character and animating only the overlap should a retro style be the goal. Finalizing the head (and by extension the face) first on any new character is perhaps the best place to start polish, as it must remain consistent across many frames to remain readable. It forms a key part of the identity of any character, and will help establish the personality across the move set. Often the face does not change at all across multiple frames to retain readability.

Shovel Knight’s run cycle does not modify his iconic helmet beyond moving it up and down, keeping it clean and readable throughout. (C ourtesy of Yacht Club Games.)


Framerate
Very few 2D games are animated at 30 frames per second like 3D games; otherwise, the workload would quickly become untenable with the lack of automated in-betweens. However, the game will likely run at 30fps or above. As such, establishing how long your frames are held will determine the fluidity of motions. For example, a 6-frame animation cycle lasting 1 second will hold each keyframe 5 frames at a time when running at 30fps (30/6 = 5). Thankfully, your character entity will update/move around the screen at 30fps or above, so the actual sprite motion can be as responsive as the fastest modern games and will still appear fluid. For more appealing 2D animation, a varied framerate across individual animations is highly recommended. Holding key poses for longer, then quickly passing over in-betweens, is often just as appealing as a higher-frame-count animation done at a steady rate. Mixing up frames held on ones, twos (or in the example above, fives) can have a far more interesting rhythm with a more manageable workload than creating every frame.
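A minimal sketch of this timing logic (tick counts illustrative), first with the even 5-tick holds from the example above, then with varied holds favoring the key poses:

# Minimal frame-timing sketch: playing a 6-frame cycle over 1 second
# in a 30fps game means each drawing is held for 5 game ticks.

GAME_FPS = 30
even_cycle = [0, 1, 2, 3, 4, 5]          # 6 drawings, evenly timed
hold = GAME_FPS // len(even_cycle)       # 30 / 6 = 5 ticks per drawing

def frame_at_tick(tick, holds):
    """Map a game tick to a drawing index given per-frame hold lengths."""
    total = sum(holds)
    t = tick % total
    for index, length in enumerate(holds):
        if t < length:
            return index
        t -= length

# Uneven timing: key poses (drawings 0 and 3) held longer than in-betweens.
varied_holds = [8, 3, 4, 8, 3, 4]        # still sums to 30 ticks = 1 second
current_drawing = frame_at_tick(42, varied_holds)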

Frame Count
Frame count, or the number of keyframes used for a particular animation, creates a direct correlation between the fluidity of the motion and the amount of work needed to complete it. As such, no more than is absolutely necessary to make the animation readable is always the best approach—at least initially, until a polish pass. As with all kinds of game animation, you will get more bang for the buck in player-facing animations, so save your highest frame count animations for the most commonly seen, usually the idle and walk/run cycles or unique character-specific moves. If animating in a retro style, while the minimum frame count for an animation is technically 2, using 3 or 4 will allow limbs to swing back and forth with in-betweens. To make a minimalist animation go even further, rotational motion generally looks more appealing than oscillating motion, requiring a minimum of 3 frames.

Mega Man's minimalist 4-frame walk cycle is actually 3 as the passing pose is reused. (Courtesy of Capcom.)

"Boiling" is a technique used to give life to outlines even when the object/character being drawn is static, and requires a minimum of 3 frames to achieve (otherwise it's just shaking/jittering back and forth). This aesthetic is best created by retracing the same line by hand, as the natural imperfections provide the effect.

Boiling between three hand-drawn identical poses. Though the fruit is static, the outline boils to give some life.

This technique is generally less desirable in pixel art than in hand-drawn 2D due to the former's desire for clean outlines. Advances in machine learning are producing results that can automatically generate the in-betweens for 2D animations, rendering even classic animations at a far more fluid rate than was initially intended. While the results are subjective, one expects automatic tweening between hand-drawn frames to become part of the 2D animator's workflow at some point in the future.

Modular Animation Hybrid Workflow
Modular animation (or motion tweening) is a unique aspect of 2D video game animation in that it shares several of the benefits of 3D animation, such as the ability to adjust characters later in production, because animations comprise the rotation and position of rigged 2D meshes. This hybrid approach provides a huge production cost saving compared to hand-drawn 2D, although it limits some of the expressiveness available to frame-by-frame 2D animation.

Modular animation involves animating flat images skinned to skeletons, moving, rotating, and scaling flat 2D meshes. (Courtesy of Esoteric Software.)

Characters are initially drawn in a separate software like Photoshop, ideally in an idle pose from the desired gameplay perspective, before being cut up into component parts for rigging in an animation software like Spine. New component parts of the character can be drawn and added as desired when the initial set cannot recreate the desired motion through rotation alone (generally when twisting motion or radically different expressions or poses are required). As more characters with similar move sets are created for your project, it will be easier to anticipate the different kinds of modular parts required to bring a character to life, but to begin with it is essential to schedule time devoted to additional assets. As with 3D, if the character artist is not also the animator, ensure a smooth workflow where new assets can be prevized by the animator for temporary use until the final asset is produced and replaced.

Onion-Skinning
Far more useful in 2D than in 3D animation due to the relative simplicity of the forced camera perspective, onion-skinning is a valuable tool to show frames before and/or after the one being worked on, emulating the page-flipping of traditional animation over a lightbox. Offered by most 2D DCCs, the number of frames shown ahead and behind is variable. Onion-skinning is most useful in creating in-betweens for insertion between already-established key poses, as you can see where you came from and where you are going simultaneously.

Onion-skinning helps visualize frames before and/or after the current one to best see how a motion flows from one frame to the next.

It is highly recommended to begin new animations with the pre-established idle (or the relevant animation this new one flows from) still visible underneath to stay on-model and ensure consistency of proportions and volumes. Regardless of the situation, onion-skinning works best with line art or silhouettes, as too much finished detail will be difficult to parse when multiple frames are overlaid. It is also best used on large motions over smaller subtle movements, as in the latter case the underlying images tend to overlap too much and become indiscernible.

Isometric Sprites
Isometric games are a great way to attain a visually impressive faux-3D effect while still being created and rendered in 2D. Although environment tiling becomes more challenging and requires more complex placement and ordering, sprite creation is not too dissimilar from straight-up 2D.

Isometric games like Syndicate (1993) were some of the earliest to provide more fully fleshed-out world exploration before the advent of 3D. (Courtesy of Electronic Arts.)

With an orthographic (non-perspective) camera angle established early on, isometric titles usually use one of a handful of setups—due to the few angles in which pixel lines can render well. These angles are determined by consistent rise-over-run values (like a staircase).

Raw pixel line angles, good vs bad (in red). Only angles with consistent rise over run look good without requiring additional anti-aliasing.

One angle must be chosen at the start of the project and maintained throughout for every art asset, usually based on the graphical projections in the table below.

Projection   X Angle (Pitch)   Y Angle (Yaw)    Pixel Ratio    Used In
Isometric    30/45/60          45               2:1/1:1/1:2    Syndicate, Desert Strike, Q*Bert
Trimetric    30/45/60          –60/–30/30/60    3:1/1:1        Fallout, Crystal Castles
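As a minimal sketch of the most common of these setups, the classic 2:1 isometric projection, the following (with an illustrative tile size) converts tile coordinates to screen pixels such that every line rises 1 pixel for every 2 it runs:

# Minimal sketch of the classic 2:1 isometric projection: grid
# coordinates map to screen pixels so lines rise 1 pixel per 2 run.
# Tile size and names are illustrative assumptions.

TILE_W, TILE_H = 32, 16   # 2:1 pixel-ratio diamond tiles

def iso_to_screen(tile_x, tile_y):
    """Project integer tile coordinates to screen-space pixels."""
    screen_x = (tile_x - tile_y) * (TILE_W // 2)
    screen_y = (tile_x + tile_y) * (TILE_H // 2)
    return screen_x, screen_y

# Neighboring tiles land half a tile across and a quarter down,
# producing the consistent rise-over-run the pixel lines require.
print(iso_to_screen(0, 0))   # (0, 0)
print(iso_to_screen(1, 0))   # (16, 8)
print(iso_to_screen(0, 1))   # (-16, 8)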

While regular 2D sprites are drawn once and flipped horizontally to cover both directions, isometric sprites often require at least two versions for each move to cover characters facing both up and down, then mirrored left and right for the four isometric compass directions, and can even require all eight real-world compass directions be catered for (five unique sprite angles in total before mirroring). As such, workflow efficiency is paramount. It is highly recommended to use some kind of 3D previz or camera reference setup to emulate the chosen projection of the project.

The Banner Saga gathered animation reference from an isometric viewpoint with ground reference markers. (Copyright © Stoic.)

Setting up a camera in a 3D DCC is fairly standard using the angle information above, though doing so for real-life footage will require some work to get right. Try drawing out a grid on the floor to match the scale and angle of your chosen tile size. This will have the added benefit of giving you distance information as your character travels various tile-lengths during actions. Modern isometric games are a great opportunity to employ rotoscoped 3D animation, producing satisfying yet consistently repeatable (for each angle) 2D animation.

A solid method for establishing how your isometric sprite holds up, especially when not drawing from reference, is the "turnaround." Starting with a single-frame idle, drawing all four or eight directions then looping them in sequence to create an on-the-spot rotation animation will instantly highlight any areas where your character is inconsistent or off-model at different angles.


Streets of Rage 4 hitbox in red, hurtbox in green. An additional check at the feet is required to ensure characters are aligned on the same depth plane for a hit to register. (Courtesy of DotEmu.)

Hitbox vs Hurtbox
The 2D equivalent of collision, boxes are drawn (or rather, coordinates entered) around body parts and similar elements to divide up 2D animation frames into areas that deal damage ("hitbox") and areas that are susceptible to it ("hurtbox"), or that behave as collision with the environment. When a hitbox intersects a hurtbox, a hit is registered. Depending on the complexity of gameplay (if attacks can be ducked or dodged, for example), hitbox and hurtbox placement can take the form of numerous boxes per frame conforming to the sprite image to a greater or lesser extent, but all are invisible to the player during play. Hitboxes generally become active on the first frame of an animation where an attack is to register, such as an outstretched punching arm, and linger only as long as a delayed impact might still read well, so careful timing is required in addition to placement. Collision boxes can also contain a variety of information, such as attack priority, determining which attacks can overpower the current one when trading blows and which cannot. Hitbox and hurtbox placement and fine-tuning is an essential task on player-vs-player (PvP) games, where even a pixel off can mean one move becomes too overpowered vs another.
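As a minimal sketch of this intersection test (the box data, frame numbers, and function names below are illustrative assumptions rather than any engine's actual API):

# Minimal hitbox-vs-hurtbox sketch: axis-aligned boxes attached to
# animation frames, with a hit registered when an active attack box
# overlaps a defense box. All data here is illustrative.

from dataclasses import dataclass

@dataclass
class Box:
    x: float      # top-left corner in world space
    y: float
    w: float
    h: float

def boxes_overlap(a: Box, b: Box) -> bool:
    """Standard axis-aligned bounding box intersection test."""
    return (a.x < b.x + b.w and a.x + a.w > b.x and
            a.y < b.y + b.h and a.y + a.h > b.y)

# An outstretched punch: the hitbox is only active on frames 3-5.
punch_hitbox_frames = {3: Box(40, 10, 24, 12),
                       4: Box(48, 10, 28, 12),
                       5: Box(44, 10, 24, 12)}

def check_hit(frame: int, enemy_hurtbox: Box) -> bool:
    hitbox = punch_hitbox_frames.get(frame)   # inactive on other frames
    return hitbox is not None and boxes_overlap(hitbox, enemy_hurtbox)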

It is important to conform to the drawn sprite image as much as possible to avoid player frustration when they believe a hit should have registered but did not, or vice versa. That said, many classic game hitboxes were limited to full tile sizes, so animating characters to stay within certain bounds will often allow for more simplified collision.


Beyond attacks, collision boxes are used for a variety of other elements too, such as collision with the world, hazardous environmental elements, triggering an event that registers the player touching an item (like a gold coin or power-up), or other volumes that send events to the game, such as the player passing a finish line.

Background Animation
A great way to bring your 2D world to life is the insertion of incidental background animation. This usually takes the form of non-interactive sprites overlaid on or adjacent to static tiles or imagery. For more classic-styled visuals, not to mention to standardize level design, backgrounds are often created via repeating tiles—primarily to conserve memory. Background animation sprites, like characters, ideally fit into the same tiling metric. Even contemporary 2D games generally render backgrounds with repeating assets that fit together seamlessly, so background animations are created similarly to character animations, with the caveat that they must seamlessly integrate with other background art. To make this easy, it is best to begin with a static element of the background, cut it out into a tile size that minimizes memory, and create additional frames that allow the image to loop. If the background animation is not self-contained but instead part of a repeating pattern itself, like a repeating flowing water tile rather than a single tree or bush moving in the wind, the task becomes more complex, as the animated tile edges must match such that the motion continues beyond the individual tile. The simplest example of this is a conveyor belt, where moving elements of the belt arrive from one side of the tile and exit the other, so that when placed in sequence the multiple tiles' motion appears to travel the entire length of the placed tiles. For this to work effectively, the moving pixels must be a division of the tile size so the motion matches in length. For example, a 16x16 pixel tile can have four detail elements every 4 pixels, two every 8, or one every 16 to sync up motion when exiting and entering the tile.
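The following minimal sketch (values illustrative) shows why that divisibility matters: a one-row conveyor tile whose detail pixels wrap around seamlessly because their spacing divides the tile size.

# Minimal scrolling-tile sketch: a conveyor belt drawn as one 16-pixel
# tile whose contents shift each frame. Because the detail spacing (4)
# divides the tile size (16), pixels exiting one edge re-enter the
# other and adjacent tiles line up seamlessly. Data is illustrative.

TILE_SIZE = 16
SPACING = 4                      # must evenly divide TILE_SIZE

# One row of the belt: a detail pixel (1) every 4 background pixels (0).
base_row = [1 if x % SPACING == 0 else 0 for x in range(TILE_SIZE)]

def row_at_frame(frame):
    """Shift the row right by one pixel per frame, wrapping around."""
    offset = frame % TILE_SIZE
    return base_row[-offset:] + base_row[:-offset] if offset else base_row

for frame in range(4):           # the pattern repeats every SPACING frames
    print(row_at_frame(frame))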



A conveyor belt tiled animation. Elements leaving one side of the tile must reappear from the opposite side to successfully cycle.

If your art style employs flat colors, motion can be inferred across a larger space by selecting only parts of the tiled background for motion—another memory saving and a minimalist aesthetic. For example, a small blue pond may need only the edges to be animated, suggesting water lapping where the water meets land, and the viewer will feel the motion of the water across the pond's entire surface. This can be further supported by adding individual elements across the pond, such as bobbing water lilies.

A stylized pond need only animate certain detail elements to give an overall impression of movement.

When it comes to moving backgrounds, less is definitely more. This is not just to save work: motion draws the eye, and too much of it can distract from key gameplay elements. Depending on game type, the player needs to be able to survey the entire screen and distinguish interactive elements like enemies from purely aesthetic motion. This is helped by background art generally using more muted palettes than character and enemy sprites.

Parallax Scrolling
While more of a programming task than animation, the animator should be involved to a degree in all motion-related elements of a game. "Parallax" motion describes the apparent displacement of objects at different speeds relative to your current viewpoint, and has been used to great effect from the 8-bit era onwards to give an impression of depth to otherwise static video game backdrops. Background imagery or tiles are essentially moved (or "scrolled") at different rates in the X and/or Y direction from the standard gameplay plane, with the general rule being the further away from the viewpoint, the slower the motion. Conversely, foreground elements should move faster. Depending on the number of different layers/speeds employed, parallax scrolling can give a sense of moving past large sweeping vistas, albeit in a 2D game.
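A minimal sketch of the technique (layer names and scroll factors invented for illustration):

# Minimal parallax sketch: each background layer scrolls at a fraction
# of the camera's speed; distant layers move slowest, foreground layers
# move faster than the gameplay plane. Factors are illustrative.

LAYERS = {
    "far_mountains": 0.2,   # barely moves
    "near_hills":    0.5,
    "gameplay":      1.0,   # the plane the characters live on
    "foreground":    1.5,   # moves faster than the action
}

def layer_offsets(camera_x):
    """Horizontal draw offset for every layer at this camera position."""
    return {name: -camera_x * factor for name, factor in LAYERS.items()}

print(layer_offsets(100))
# {'far_mountains': -20.0, 'near_hills': -50.0,
#  'gameplay': -100.0, 'foreground': -150.0}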

Different layers of a rendered 2D scene scrolling at different speeds give the impression of depth via parallax.

While used more rarely, a unique rotational/panning camera effect can be recreated by rotating background planes in the opposite direction of gameplay and foreground elements. This can be confusing unless it is made clear to the player what is happening, and will be aided greatly by placing a rotating element screen-center, such as a tower or other column-like structure with some kind of rotational sprite animation on it to support the overall rotating effect.

Parallax may also be programmatically performed on a per-scanline (horizontal row of pixels) basis to give the effect of 3D skewing of ground planes. Used effectively, it can produce a faux-3D effect of perspective on otherwise flat artwork. Combined with sprite-scaling, scanline parallax was used to great effect in early 2D racing games and for the flooring in 2D fighting games. Essentially, the more layers move at different rates, the more the eye is tricked into perceiving 3D depth.

2D Visual Effects Animation
Visual effects animation is a unique skill requiring a good understanding of how different particles move at different rates and change over time, yet each frame must be drawn all at once as a single image. Fire licks abstract shapes as it moves away from the source, turning into billowing smoke before eventually fading out. Water splashes and dust clouds appear with directionality in response to impacts, splashing up then landing, or fading to nothing, respectively. Explosions are near-instantaneous eruptions of flashing color that move slower as they fade out. An effect's lifetime (length) is generally driven by how many may appear onscreen simultaneously as much as by workload. Unlike 3D, however, the creation of effects does not require a completely different pipeline from character animation, so it is generally done with the same process of progressing frames. Another strength of 2D effects animation is that standard DCC effects, such as motion blur, smearing, multiples, etc., can be baked into the static images without the need for costly post-effects or unique character model and rigging additions.

Mark Ferrari’s color-cycling is the pinnacle of the technique, employing clever cycling over not just solid massed like the waterfall but also as rain falls down over the complex scenery by clever arrangement of colors. (Courtesy of Mark Ferrari)



Effects in 2D games also encompass visual feedback, such as shaking sprites to signify damage, flashing different color palettes on a whole sprite to show status changes such as being set on fire, and freezing either the whole screen or the relevant characters for a split second to signify landing a hit, known as "hit-stop."

A distinct animated effect unique to pixel art and enabled by limited palettes is "color-cycling." This involves stepping a series of colors, usually a gradient, forward one color at a time in a repeating cycle. When the gradient is drawn in a scene (for example, a waterfall with the colors stacked vertically), the cycling gives the impression of the pixels progressing downward as if the water were falling. For added fidelity, procedurally blending between different color values rather than just stepping between them provides an even more fluid look.
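A minimal sketch of the technique (palette values and slice bounds invented for illustration), rotating only the water entries of an indexed palette each frame:

# Minimal color-cycling sketch: the sprite's indexed pixels never
# change; instead a slice of the palette is rotated each frame, so a
# vertically stacked gradient appears to flow like falling water.

palette = [
    (10, 10, 40),     # 0: background (not cycled)
    (40, 80, 200),    # 1: water, darkest   -+
    (70, 110, 220),   # 2: water             |  cycled slice
    (110, 150, 240),  # 3: water             |
    (160, 200, 255),  # 4: water, lightest  -+
]

CYCLE_START, CYCLE_END = 1, 5  # rotate palette entries 1-4 only

def cycle_palette(pal):
    """Rotate the water colors one step; the last wraps to the first."""
    s = pal[CYCLE_START:CYCLE_END]
    return pal[:CYCLE_START] + [s[-1]] + s[:-1] + pal[CYCLE_END:]

for _ in range(3):             # advance three frames of the animation
    palette = cycle_palette(palette)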

Modern Case Study: Streets of Rage 4
Streets of Rage 4, a masterful sequel to a classic series that had lain dormant for 26 years, showcases a great mix of new and old 2D animation techniques with a faithful respect for the original subject matter. SoR4 represents modern 2D games that eschew the limitations of old to provide a fluidly animated, high-definition visual feast of 2D art and animation, while greatly benefiting from historical 2D game development methods. The notable inclusion of classic sprites from the original games, scaled up to match character heights in the modern game, provides a rich visual comparison. SoR 2 and 3 sprites were scaled up 4x to match the contemporary 1080p visuals, while the series' smaller original sprites required 5x scaling. All classic sprites were ripped in their entirety, requiring only adjustments to hitboxes to match the modern gameplay. The authentic gameplay feel of all characters was recreated by playing the new game synced with the old simultaneously, both playable on the same controller, to match frame-for-frame precision.

Streets of Rage 4 featured pixel art from multiple games in the series, scaled to match the modern-day equivalents. (Courtesy of DotEmu.)

SoR4 plays with hitboxes a lot to favor the player (more possible than in a player-vs-player game), modifying hurtbox size on the fly to make it harder for enemies to land a hit when the player is moving, and scaling it down gratuitously when jumping. Conversely, enemy hurtboxes remain large when being knocked backward to enable cooperative players to "juggle" enemies back and forth in the air.

The robust timeline affords frame-specific control over everything from animation frame timing, image position and rotation offsets, and collision boxes to the application of interrupt frames, VFX, or gameplay rules. (Courtesy of DotEmu.)


Environmental animation is rich and employs a variety of layering, both in foreground overlays and effects like steam and background animated sprites, to truly envelop the 2D character art in the world. A standout feature is the dynamic character lighting, especially the rim-lighting—pixels next to alpha transparencies are automatically recognized as the character's edge, and lights placed in the environment emboss the sprites with colored highlights from the correct direction.

Streets of Rage 4 layered environmental effects and animation, better grounding the characters in the world. (Courtesy of DotEmu.)


(Courtesy of DotEmu.)

Interview: Ben Fiquet
Art Director & Animation: Streets of Rage 4

You have a distinct art style—did you have to modify or unlearn anything to bring it closer to the already-established look of Streets of Rage, such as frame limitation?

Compared to Wonder Boy, our previous game, everything is more realistic, so mainly I had to step up my game for drawing cool anatomy and muscles. The style of Streets of Rage, even if more defined than an 8-bit game, had a lot of leeway to interpret the world and characters. I tried to focus on having a recognizable tone throughout the game. But for animation, I didn't want any frame limitations. I wanted to see them move smoothly but with impact. The limitations of the time were driven by the console's capabilities, but I'm sure they would have drawn everything with many frames if they could.

The idles and walks are iconic. Were they worked on first to help define the characters, or was it more attack moves?

Yes, the idles and walk animations are the most important for me, because these are the animations you see the most when playing. This is the pose your character always comes back to, and that needs to be very polished. But in my head, I had already conceptualized the attacks before starting.

The flame-like visual effects are essential in selling the larger moves. Do you have any 2D effects references or recommendations?

Everything that is hand-drawn animated is my point of reference. Whether it be the craziness of Hiroyuki Imaishi, Yoh Yoshinari, Yutaka Nakamura, or the subtleties in Western animation like Michel Gagné. There are many ways to represent animated VFX; I tried to be impactful yet very fluid.

Streets of Rage 4's characters are large enough to see facial expressions—what size were the characters on average, and are your sprites vector-based or just large enough to avoid pixelation when scaling in at the start?

Everything is animated in Photoshop (bitmap), at twice the resolution it's ultimately going to be shown. I reduce the size of the files by 50% before coloring, because it is unbearable for Photoshop after too many layers in 4K. My last Adam file, for example, is 74Mb on disk, but takes 8Gb of RAM when opened in Photoshop.

There were beautiful uncoloured early line tests for Streets of Rage 4. What is the coloring process once you have laid down the line art, and did you do all prototyping in sketch form?

In terms of production, since the coloring process wasn't impacting the gameplay, it was easier for the programmers and game designers to start working as soon as possible with black-and-white rough animations. After the rough was cleaned up, the coloring consisted of painting, frame by frame, all the colors and shadows behind it. But on top of that, I wanted to have some colored lines, so the line work is applied as overlay mode, but some information was lost in the process, so I had to do a second pass of clean-up with a black line on top of it all (I'll never do that again). For good measure, I added some gradients and texture, which helps anchor the characters to the backgrounds and general art direction.

What essential workflow tips would you recommend?

Speaking strictly about hand-drawn animation for games, I'd recommend really planning ahead. Unfortunately, the technique doesn't mesh well with the general flow of game development, which is mostly about iterating again and again. I'd say that communicating intent and vision early in the process helps greatly. As a character designer, I'd recommend thinking about the possibilities of animation, like having a scarf that would nicely overlap in motion. As an animator, video games are a lot about silhouettes, to really communicate the gameplay at hand. This is why 2D fighting characters from Capcom or SNK always have their shoulders facing the camera, even if this is not a credible stance. And in general, everything is about contrast and rhythm, whether it be your shapes, animations, or colors. You need to be able to easily communicate the important information without drowning in it.

What is your work process for starting to animate a new character?

Usually, I work at 30fps and set the first key poses at 4 frames. I tend to copy/paste my first pose and just select the head, arms, or other parts and move them around and distort them until the motion is nice. I can quickly add some breakdowns because it helps me better understand the movement. Ultimately, I don't make a clean-up layer; I tend to work pretty clean and erase a lot directly to have a finished line art from my rough.

What might you consider the key differences between 2D and 3D animation?

I don't have much experience with 3D, but from my point of view, I think 2D is more going to the point. 3D allows you an infinite amount of tweaks, and some information from the animations is processed by the computer, which can mean losing focus on what's important.

Chapter 15

The Future

Getting a Job
Whether currently gainfully employed or attempting to break into the industry, it's always good to have your "ducks in a row" with regards to future employment options, all of which should be online. Nowadays it's seen as unusual for an artist to have no web presence, so not just your reel but also your CV should be online, discoverable and ready to point anyone to in an instant. Animators should always have a relatively up-to-date (no more than a few years old) demo reel of their best work, because you never know when you might need to search for work in a bind. Even the most promising game projects, through no fault of your own, can get canceled on the whim of decisions made beyond your control. It's rare to meet a developer who hasn't had at least one canceled game under their belt, and game studio closures and layoffs are sadly all-too-common events in the industry. It is highly advisable to squirrel away a greatest-hits collection of videos or scenes throughout a project to a single location for easy access. Most studios will be fine with you making a reel once the work is in the public eye, so it can't hurt to prepare ahead of time so there is as little work as possible left to do after ship.

If you're especially lucky and the game is a hit, you may even ride on any marketing or media buzz the game generates for even more exposure!

The Game Animation Demo Reel
So what makes a good game animation demo reel, and how do you stand out from the crowd when it's likely that tens or hundreds of animators are flooding recruiters' inboxes with applications? It makes sense to present the work you are most happy with so far, edited in a digestible form (only the best sequences from cinematics rather than the entire cutscene, for example). Below are tips on what shots to include to maximize your hiring potential into a game studio, and what to add to your reel to round it out if they're not immediately biting. While a degree of similarity is expected, students should avoid presenting only their coursework featuring the same characters as the rest of their class. There are a growing number of resources online for animators looking to experiment with different rigs so as to avoid having the exact same reel as scores of others. Including personal animations shows a level of creative fertility and enthusiasm that can help you stand out above your peers.

What to Include
Every animator's experience will be different, and the game projects they work on will likely not cover all the items below, so it's worth creating a few shots specifically for the reel should you wish to cover all your bases. Here are a few suggestions that exemplify a lot of the varying tasks one might be expected to perform as a general game animator.

Navigating an assault course is an excellent way to include a sequence of moves typical of gameplay.

• Dialogue scene: Showing acting, lip-sync to audio, and potentially camerawork. Should ideally include a change in emotion, and is best with two characters conversing.
• Two-person interaction: Usually a combat encounter, potentially including extended contact like a throw.
• Environmental navigation: Parkour, jumping over obstacles, climbing, and so on. The more interesting/complex an assault course, the better.
• Multihit weapon combo: Illustrates an understanding of weight and fluidity.
• Walk/run cycle(s): The video game equivalent of a still-life bowl of fruit. Everyone tries it, but it is very hard to master.
• Realistic mocap clean-up: If you are lucky enough to have access to raw unedited mocap, show a before and after to share your understanding of plussing and exaggerating mocap.
• Extreme exaggeration: Keyframe an action from scratch that's either hyper-exaggerated or cartoony to show your understanding of keyframe fundamentals.
While it's not possible to create one for every occasion (some suggest having multiple reels for different kinds of jobs), tailoring your reel for the type of job you desire is recommended. If you enjoy hyper-realism, aim for mocap or realistic keyframe. If you just love exaggerated colorful characters, then be sure to lean toward that. The most important thing is to have fun while making your reel. You should enjoy animating it, after all, and it will show through!

A great way to be inspired is to reference other examples of game animation demo reels. Check out www.gameanim.com/book for an ever-increasing gallery of professional game animation reels.

Editing Your Reel
There are a few general rules when editing the work that goes into your reel for maximizing impact and holding the viewer's attention. Ultimately, the quality of the animation is the single biggest factor, but when competing against animators of equal talent, the way your reel is put together will only help your chances of a callback.
• Start with your best work: You want to instantly stand out from the crowd. Beginning with something eye-catching and unique will set the expectations for what is to follow. Interviewers will likely ask you which work you rate the highest and why, so showing what they might consider the best work later in a reel raises concerns.
• Only show good work: A shorter reel of high quality beats a longer one with filler that gradually descends into unimpressive work, leaving a bad final impression at the end.


Cutting your reel to music in a video sequencer.

• No more than 2–3 minutes: Those involved with hiring will be viewing many reels in an attempt to whittle down the best, so going over a certain length will be detrimental to their attention span. Potential employers will expect a student's reel to be shorter still due to them having no work history, so don't stress if that comes in around the 1-minute mark.
Beyond those decisions, editing to your preferred music helps inspire and give energy and pacing to a reel—just avoid anything potentially irritating. Importantly, take time to render shots cleanly—the visual polish of your reel goes a long way in conveying how much of a perfectionist you'll be on the job.

The Reel Breakdown
An important element to include with your reel is the shot breakdown, carefully detailing which element of each shot you worked on to avoid confusion as to your responsibilities on collaborative team efforts. This is best included with the video description or wherever you're pointing prospective employers to, such as a page on your site. Both YouTube and Vimeo include a handy formatting trick to allow direct linking to timestamps in the video. Entering numerical timestamps as below will cause the time to form a link directly to the shot in question:
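For example, a hypothetical breakdown (shot names and times invented for illustration) might read:

0:05 Sword combo — all animation
0:31 Dialogue scene — facial animation and cameras only
1:12 Run cycles — all animation; character model by another artist

Both services automatically convert each timestamp in the description into a clickable link that jumps the video to that moment.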

Being caught claiming work you did not do as your own is the worst blight an aspiring animator can have on their career, and the industry is small enough that it will carry on to future jobs. Yet it’s surprising how often it happens, and if not caught in the initial application process, it will become apparent as soon as the work produced on the job fails to match the reel’s quality.



Adding a timestamped breakdown to Vimeo or YouTube video descriptions provides direct links to shots.

The last thing to remember about the game animation reel is that it needn't be static. Keep updating it every time you improve or add a new sequence. Vimeo in particular allows you to overwrite previous versions, while YouTube requires a new video upload, meaning previous links sent out will become obsolete. It is therefore recommended to link potential employers to a page that embeds the YouTube version, such as your LinkedIn profile or personal website, rather than to the video directly.

Your Résumé
Game animation, and game development as a whole, has become an ever more desirable career as the years have progressed. While new job opportunities have certainly sprouted with the democratization of game development tools and digital distribution, the competition for those positions has increased in lockstep. The quality of students leaving schools gets better every year, so in order to stand out from the crowd, you need a stronger portfolio than ever. Concessions will be made by employers when comparing junior and senior portfolios and résumés (juniors cost less, after all), so don't worry about your lack of AAA titles or the 5+ years of experience a job posting might be asking for—just get your résumé out there and get applying. Make sure your résumé is up to date on LinkedIn and prominently features a link to your reel. Your opening statement should be a condensed version of your cover letter, discussing high-level goals such as your reasons for wanting to become a game animator, with your actual cover letter uniquely tailored to each application, explaining why you want to work at that particular studio.

One of the most impressive pieces a prospective student can include in an application is evidence of working on a team and showing work in a game engine. This allows an applicant to leapfrog over another of similar skill who has only touched the artistic side of game animation. Including game footage of real-time projects already illustrates a level of comfort and competence in a game engine that will allow you to more quickly grasp your role at a studio.

Be sure to include awards that you or your teams might have won, even if not animation-related, as the animator's involvement in gameplay is always relevant. Steer clear of statements on your proficiency in different tools—all DCCs are similar enough that you can learn on the job, so there's no need to hobble yourself at this stage if the studio uses a different DCC, for example. Just like your demo reel, focus on experience relevant to games. While animators starting out will likely wish to detail their school-era experience, once employed in the field it's better to focus on the game projects you've shipped rather than the work experience you used to help pay your way through college. That said, a varied life experience is a huge bonus to help you stand out from others, so be sure to include your pursuits beyond video games or that year of traveling abroad. The least imaginative video games are made by those who list only games as their hobby, after all. LinkedIn, like Twitter, is a great way to connect with others in the field—just be wary of hounding recruiters or game developers in your keenness to learn. If someone does take the time to answer questions you might have or provide feedback on your reel, don't immediately follow up with another round of questions. If you don't wish to take advantage of LinkedIn, it's still a good model to start from should you write your résumé elsewhere (including an offline version with your cover letter is acceptable; just make sure to link to your online spaces too).

Your Web Presence
It used to be the case that the only way to get your work out there was to build a website from scratch, with sections for your portfolio and résumé easily linked to in an emailed application. Not much has changed with that, except that social media has essentially standardized the locations for content to be placed, forgoing the need for a personal site. It helps immeasurably to have a presence in these online venues.
• LinkedIn: The de-facto location for résumés, allowing direct linking to your demo reel and clearly marked sections for all relevant materials, not to mention the ability to directly connect with professionals and recruiters.
• YouTube: Gets your reel out to the widest audience possible while also allowing some nice markup of videos to make reel breakdowns interactive.
• Vimeo: Better compression quality than YouTube; this should be the version you send out for applications. Allows password protection for sensitive material, as well as better grouping of videos for portfolio creation.


Maintain a consistent web presence for easier portfolio discovery.

• ArtStation: The definitive location for video game art and animation portfolios, allowing you to combine all your work in one place.
• Twitter: Take part in a growing game animation community, with connections made and knowledge shared primarily on Twitter. Link back to your résumé and demo reel from here, as this is the most front-facing and accessible point of discovery.
That's not to say a portfolio site is useless; it's just now best used as a landing page pointing to all these other locations. Game animation applications these days still consist of a digitally enclosed cover letter and résumé, but once picked up by an interested studio, it's more often than not the links to your online résumé and reel that will be shared among those involved in hiring. As is likely the case for every field now, how you conduct and present yourself online will certainly play into a studio's first impression—so play nice. For best visibility for hiring, avoid using multiple aliases across your various social media sites. Consistent image and name "branding" makes it easier for other artists and potential employers to find and follow you across the internet.

The Animation Test

Once a reel has caught the attention of a studio, there is likely one last hurdle to overcome before they offer an interview. Any studio worth its salt will likely request an animation test to illustrate your skill within a fixed time, allowing them to compare like with like across candidates, as well as to highlight strengths and weaknesses that provide context and points of discussion in the interview to follow, if successful. Every test is tailored to the studio and position, but they generally involve some of the elements already listed in the ideal reel contents earlier. A gameplay position will request some specific navigation or combat scenario, whereas a cinematic one will likely require acting and camerawork. Here are some tips for completing the test if required:

• It can be difficult to make time if you're already working a job or have other commitments, but some animators believe they are "above" testing and that their reel should be enough. Part of the reason for testing is to weed out attitudes like that, so make the time.
• Set aside a weekend for the test, free of obligations. Studios will generally let you choose your start time, so make sure you are distraction-free, and they will send you the relevant files and brief when ready.
• Read the brief carefully. Failing to understand and follow it raises a red flag as to how you'll work on the job, and oversights like this waste time and money once you're working.
• If the test is in a DCC you're unfamiliar with, prepare with a practice run on another scene beforehand. Once you do have the brief, be sure to collect reference for the best quality possible; this should not count toward the time limit.
• As when animating games for real, work in blocking passes rather than straight ahead. This ensures you won't come up short as time runs out (which would likely be a fail), leaving final quality as your only remaining concern.
• Lastly, don't go over the limit. There's a degree of trust involved in a remote test, but the team likely sees many of these, so they have a good idea of how much work can realistically be achieved in the allotted time.

Incoming Technologies

By its very nature, video game development is a technology-based medium (though don't confuse it with the tech industry: games are first and foremost in the business of entertainment, with technology used only as a tool). Great game experiences are made by people, not tools or technology; these only enable developers to better realize their ideas. Nevertheless, being aware of new technology coming around the corner can give forward-thinking game animators a leg up on the competition. Knowing where to invest your efforts ahead of time makes it easier to spot creative opportunities as new techniques arise. Here are some of the current and upcoming technological advancements that will make an impact on the medium in the near-to-medium term.

Virtual and Augmented Reality

Virtual reality opens up two new areas related to game animation: one is the type of games we can hope to make with new levels of immersion, the other is new ways of working. VR/AR limits us in some ways, such as no longer having camera control or the option to cut, so traditional cutscenes, which remove all interaction, no longer work. VR/AR does, however, allow a higher degree of detail and a focus on subtlety, as intricate movements seen up close are even more impressive than the big-budget explosions we've traded on in the past. Similarly, character realization becomes much more important, as we feel we're right there with characters who need to behave more believably than ever before. Perhaps just as exciting is how it might change our daily workflow. Manipulating characters in a virtual 3D space like a stop-motion animator is an altogether different and more active experience from sitting at a desk and computer screen, though whether it is better remains to be seen. More likely, we'll eventually employ some form of augmented reality as a screen replacement rather than truly immersing ourselves, enabling not only different methods of animating characters but also new opportunities for communication and remote working.

Affordable Motion Capture

The affordability of depth-perceiving cameras, such as webcams or those included with game consoles, has given birth to middleware that lets users string several together to create rudimentary motion capture volumes. Phone cameras can already provide the same tech, with built-in software performing rudimentary facial performance capture.

Even phone cameras can record rudimentary facial mocap. (FaceCap, Courtesy of Niels Jansson.)

Motion capture based solely on video footage, despite its lower quality, can satisfy smaller studios and single-person teams on a budget, broadening the scope of modest projects from a character perspective. The most recent advancements incorporate facial tracking to complement body data.
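Facial capture tools like these typically stream per-frame expression weights that drive a character's facial blend shapes. Below is a minimal sketch of how such weights might be applied to a mesh each frame; the shape names and data layout are hypothetical, for illustration only.

```python
# Minimal sketch of applying captured facial weights to blend shapes:
# the posed face is the neutral mesh plus a weighted sum of shape deltas.
# Shape names and array shapes here are hypothetical stand-ins.
import numpy as np

def apply_blendshapes(neutral_verts, shape_deltas, weights):
    """neutral_verts: (N, 3) vertex positions; shape_deltas: dict of (N, 3)
    per-shape offsets; weights: 0..1 floats streamed from capture software."""
    verts = neutral_verts.copy()
    for shape_name, weight in weights.items():
        verts += weight * shape_deltas[shape_name]  # linear blend per shape
    return verts

# Example frame: a captured half-smile with slightly raised brows.
neutral = np.zeros((4, 3))  # stand-in for real mesh vertex positions
deltas = {"smile": np.ones((4, 3)) * 0.1, "brow_up": np.ones((4, 3)) * 0.05}
frame_weights = {"smile": 0.5, "brow_up": 0.2}
posed = apply_blendshapes(neutral, deltas, frame_weights)
```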

Runtime Rigs

Standardization of skeletons may soon be a thing of the past, with animation data transferred cleanly onto a variety of skeletons, their differing sizes and proportions taken into account at runtime. This will allow future games to field a more diverse cast of characters with a variety of silhouettes, letting animation creation tend toward unique, characterful work rather than homogenized movements.

Real-time control rig within Unreal Engine 4. (Copyright 2018 Epic Games, Inc.)
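The core idea behind this kind of runtime retargeting can be illustrated simply: joint rotations are largely proportion-independent and copy across directly, while bone translations must be rescaled to the target character's proportions. The sketch below is a deliberately simplified version of that idea (real systems also compensate for differing joint orientations and ground contact); the JointPose structure and joint names are hypothetical.

```python
# Simplified sketch of runtime retargeting: rotations transfer as-is,
# translations are rescaled by the ratio of bone lengths. Data structures
# are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class JointPose:
    rotation: tuple     # local rotation as a quaternion (x, y, z, w)
    translation: tuple  # local translation relative to the parent joint

def retarget_pose(source_pose, source_bone_lengths, target_bone_lengths):
    """Map an animated pose onto a skeleton with different proportions."""
    retargeted = {}
    for joint_name, pose in source_pose.items():
        scale = target_bone_lengths[joint_name] / source_bone_lengths[joint_name]
        retargeted[joint_name] = JointPose(
            rotation=pose.rotation,  # proportion-independent
            translation=tuple(c * scale for c in pose.translation),
        )
    return retargeted

# One frame of a run, remapped onto a shorter-legged character.
run_frame = {"thigh_l": JointPose((0.0, 0.0, 0.38, 0.92), (0.0, 45.0, 0.0))}
shorter = retarget_pose(run_frame, {"thigh_l": 45.0}, {"thigh_l": 38.0})
```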

In-Game Workflow

Once rigs are working in real time, animating them in the engine is the next logical step. Eschewing the entire DCC/export step will not only greatly reduce iteration time (and therefore increase quality), but animating characters on final meshes, physics and all, at upwards of 60 fps in the final environment, while tweaking mid-game, can only improve an animator's productivity, not to mention making the experience much more accurate to the final result.

Procedural Movement

Already aiding ragdoll and physics motion, advanced procedural physics systems convert an animator's work into physical behaviors that can then be adjusted with a variety of parameters to fill in the blanks, removing the need for functional transition animations and letting the animator concentrate instead on a smaller set of character-defining animations. Procedural movement should correctly handle the shifting of weight and momentum, as well as produce more convincing movement between animations by replicating the motor strengths of a character's joints and muscles. Procedural movement has been around for some time but has yet to produce convincing results for more realistic characters, being unwieldy and difficult to art-direct. It may yet develop enough to provide a complete solution for moving characters, but for now it is a great complement to other systems already in use, or on the near horizon.
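One common building block for such systems is a proportional-derivative (PD) controller per joint, which applies torque pulling the simulated joint toward the animator's authored pose so that physics fills in the weight shifts between animations. The following is a minimal single-joint sketch under that assumption; the gain values and function names are illustrative, not from any particular engine.

```python
# Minimal sketch of physics tracking an authored pose via a PD controller.
# kp acts like muscle strength (stiffness); kd damps oscillation.
def pd_torque(target_angle, current_angle, angular_velocity,
              kp=120.0, kd=8.0):
    """Torque driving a simulated joint toward the animated target angle."""
    return kp * (target_angle - current_angle) - kd * angular_velocity

def step_joint(angle, velocity, target, dt=1.0 / 60.0, inertia=1.0):
    """Advance one joint by a single 60 fps frame (semi-implicit Euler)."""
    torque = pd_torque(target, angle, velocity)
    velocity += (torque / inertia) * dt
    angle += velocity * dt
    return angle, velocity

# One second of simulation: the joint settles toward the authored pose.
angle, velocity = 0.0, 0.0
for frame in range(60):
    angle, velocity = step_joint(angle, velocity, target=1.2)
```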

Machine Learned Motion

Similar to both motion matching and procedural movement, machine learning utilizes the same motion-captured datasets not just to provide a bank of animations to blend between, but to "teach" the character how to walk, run, or perform whatever actions are fed into the system. Machine learning is a universal approach to using the processing power of computers to precompute solutions (rather than compute them in real time), taking the challenge of character movement offline so developers can teach characters movements to perform at runtime, even modifying the results to match variable parameters such as navigating complex terrain. This process shifts the cost of movement from memory to computation and will likely be the next big pursuit once we again begin hitting motion-matched animation memory limits. Machine learning is perhaps the broadest term for advances in AI and computation right now; as it's more a technique than a technology, its applications are limited only by the imaginations of those using it in game development.
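As a toy illustration of that offline/online split, the sketch below "trains" a next-pose predictor from a motion dataset offline, then drives the character at runtime with one cheap matrix multiply per frame. Real systems use neural networks and far richer features; the linear model and array names here are purely illustrative assumptions.

```python
# Toy offline/online split: fit next_pose = f(pose, control) offline with
# ordinary least squares, then predict at runtime per frame. Stand-in data.
import numpy as np

def train_next_pose_model(poses, controls):
    """Offline: learn to map (current pose, control input) to next pose."""
    X = np.hstack([poses[:-1], controls[:-1]])  # features for each frame
    Y = poses[1:]                               # the following frame's pose
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # precomputed solution
    return W

def predict_next_pose(W, pose, control):
    """Runtime: a single matrix multiply drives the character forward."""
    return np.hstack([pose, control]) @ W

rng = np.random.default_rng(0)
poses = rng.normal(size=(500, 6))     # stand-in for mocap pose features
controls = rng.normal(size=(500, 2))  # stand-in for stick input per frame
W = train_next_pose_model(poses, controls)
next_pose = predict_next_pose(W, poses[-1], controls[-1])
```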

Ultimately, what matters most when using any new technology is that it allows animators to focus on the foremost challenge of bringing game characters to life, with results that players will enjoy. Like previous technological leaps, such as the advent of 3D and, later, motion capture, it simply frees animators up to focus their efforts on the animations that really count, without having to fear the ever-increasing number of actions required for fluid movement.

No technology on its own will ever make appealing characters that become player favorites; instead, this relies on the combined efforts of animators and their colleagues to create life where there was none, so that players can explore and enjoy the future of this growing medium, inhabiting worlds and interacting with characters in ever deeper and more rewarding ways.

Remote Working

Perhaps the biggest change to game development on the horizon is the shifting attitude toward physical studios. Remote working has long been on the rise, especially among smaller studios, as developers realize they don't all need to be in the same building to make video games. Now, in our (at time of writing) mid-pandemic world, working from home has moved from the exception to the norm, and despite our 100% digital output, game studios that haven't adapted their technology and working practices will fall behind those already steeled against the new reality. For years, outsourcing studios have produced neatly packaged non-gameplay assets such as concept art, props, and cutscenes, but now some international teams are leading the way in fully developing successful games via a virtual studio setup. As ever, communication is key, and it is difficult even when everyone is in the same location, but we're rapidly reaching an inflection point where improving remote-work technology is converging with traditional studios' own idiosyncrasies and dysfunctions, making remote work more efficient than ever. Add to this that hiring the best people in the world becomes available to even the most modest studio willing to support a remote-working pipeline, avoiding months of paperwork and lawyers' fees, as well as the cost of relocating employees and their families. It's now no longer a matter of if, but when.

