English. xxv, 427 pages, illustrations. 2006.
Linux Programming by Example
Arnold Robbins
Prentice Hall Professional Technical Reference
Upper Saddle River, NJ 07458
www.phptr.com
Prentice Hall Open Source Software Development Series
Arnold Robbins, Series Editor

“Real world code from real world applications”

Open Source technology has revolutionized the computing world. Many large-scale projects are in production use worldwide, such as Apache, MySQL, and Postgres, with programmers writing applications in a variety of languages including Perl, Python, and PHP. These technologies are in use on many different systems, ranging from proprietary systems, to Linux systems, to traditional UNIX systems, to mainframes.

The Prentice Hall Open Source Software Development Series is designed to bring you the best of these Open Source technologies. Not only will you learn how to use them for your projects, but you will learn from them. By seeing real code from real applications, you will learn the best practices of Open Source developers the world over.
Titles currently in the series include:

Linux® Debugging and Performance Tuning: Tips and Techniques
Steve Best, 0131492470, Paper, 10/14/2005
The book is not only a high-level strategy guide but also one that combines strategy with hands-on debugging sessions and performance-tuning tools and techniques.

Linux Programming by Example: The Fundamentals
Arnold Robbins, 0131429647, Paper, 4/12/2004
Gradually, one step at a time, Robbins teaches both high-level principles and “under the hood” techniques. This book will help the reader master the fundamentals needed to build serious Linux software.

The Linux® Kernel Primer: A Top-Down Approach for x86 and PowerPC Architectures
Claudia Salzberg, Gordon Fischer, Steven Smolski, 0131181637, Paper, 9/21/2005
A comprehensive view of the Linux kernel is presented in a top-down approach: the big picture first, with a clear view of all components, how they interrelate, and where the hardware/software separation exists. The coverage of both the x86 and the PowerPC is unique to this book.
© 2004 Pearson Education, Inc.
Publishing as Prentice Hall Professional Technical Reference
Upper Saddle River, New Jersey 07458

Prentice Hall PTR offers discounts on this book when ordered in quantity for bulk purchases or special sales. For more information, please contact: U.S. Corporate and Government Sales, 1-800-382-3419, [email protected]. For sales outside of the United States, please contact: International Sales, 1-317-581-3793, [email protected].

Portions of Chapter 1, Copyright © 1994 Arnold David Robbins, first appeared in an article in Issue 16 of Linux Journal; reprinted by permission. Portions of the documentation for Valgrind, Copyright © 2003 Julian Seward, reprinted by permission. Portions of the documentation for the DBUG library, by Fred N. Fish, reprinted by permission.

The GNU programs in this book are Copyright © 1985-2003, Free Software Foundation, Inc. The full list of files and copyright dates is provided in the Preface. Each program is “free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.” Appendix C of this book provides the text of the GNU General Public License.

All V7 Unix code and documentation are Copyright © Caldera International Inc. 2001-2002. All rights reserved. They are reprinted here under the terms of the Caldera Ancient UNIX License, which is reproduced in full in Appendix B.

Cover image courtesy of Parks Sabers, Inc. The Arc-Wave™ saber is manufactured by Parks Sabers, Inc., Copyright © 2001, www.parksabers.com. Parks Sabers is not associated with any Lucasfilm Ltd. property, film, or franchise.

The programs and applications presented in this book have been included for their instructional value. They have been tested with care but are not guaranteed for any particular purpose.
The publisher does not offer any warranties or representations, nor does it accept any liabilities with respect to the programs or applications.

UNIX is a registered trademark of The Open Group in the United States and other countries. Microsoft, MS, and MS-DOS are registered trademarks, and Windows is a trademark of Microsoft Corporation in the United States and other countries. Linux is a registered trademark of Linus Torvalds. All company and product names mentioned herein are the trademarks or registered trademarks of their respective owners.

This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, v1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/), with License Option B.

Printed in the United States of America
ISBN 0-13-142964-7
Text printed on recycled paper
First printing

Pearson Education Ltd.
Pearson Education Australia Pty., Limited
Pearson Education Singapore, Pte. Ltd.
Pearson Education North Asia Ltd.
Pearson Education Canada, Ltd.
Pearson Educación de Mexico, S.A. de C.V.
Pearson Education—Japan
Pearson Education Malaysia, Pte. Ltd.
To my wife Miriam, and my children, Chana, Rivka, Nachum, and Malka.
Contents
Preface
PART I: Files and Users

Chapter 1: Introduction
1.1 The Linux/Unix File Model
  1.1.1 Files and Permissions
  1.1.2 Directories and Filenames
  1.1.3 Executable Files
  1.1.4 Devices
1.2 The Linux/Unix Process Model
  1.2.1 Pipes: Hooking Processes Together
1.3 Standard C vs. Original C
1.4 Why GNU Programs Are Better
  1.4.1 Program Design
  1.4.2 Program Behavior
  1.4.3 C Code Programming
  1.4.4 Things That Make a GNU Program Better
  1.4.5 Parting Thoughts about the “GNU Coding Standards”
1.5 Portability Revisited
1.6 Suggested Reading
1.7 Summary
Exercises
Chapter 2: Arguments, Options, and the Environment
2.1 Option and Argument Conventions
  2.1.1 POSIX Conventions
  2.1.2 GNU Long Options
2.2 Basic Command-Line Processing
  2.2.1 The V7 echo Program
2.3 Option Parsing: getopt() and getopt_long()
  2.3.1 Single-Letter Options
  2.3.2 GNU getopt() and Option Ordering
  2.3.3 Long Options
    2.3.3.1 Long Options Table
    2.3.3.2 Long Options, POSIX Style
    2.3.3.3 getopt_long() Return Value Summary
    2.3.3.4 GNU getopt() or getopt_long() in User Programs
2.4 The Environment
  2.4.1 Environment Management Functions
  2.4.2 The Entire Environment: environ
  2.4.3 GNU env
2.5 Summary
Exercises
Chapter 3: User-Level Memory Management
3.1 Linux/Unix Address Space
3.2 Memory Allocation
  3.2.1 Library Calls: malloc(), calloc(), realloc(), free()
    3.2.1.1 Examining C Language Details
    3.2.1.2 Initially Allocating Memory: malloc()
    3.2.1.3 Releasing Memory: free()
    3.2.1.4 Changing Size: realloc()
    3.2.1.5 Allocating and Zero-filling: calloc()
    3.2.1.6 Summarizing from the GNU Coding Standards
    3.2.1.7 Using Private Allocators
    3.2.1.8 Example: Reading Arbitrarily Long Lines
    3.2.1.9 GLIBC Only: Reading Entire Lines: getline() and getdelim()
  3.2.2 String Copying: strdup()
  3.2.3 System Calls: brk() and sbrk()
  3.2.4 Lazy Programmer Calls: alloca()
  3.2.5 Address Space Examination
3.3 Summary
Exercises
Chapter 4: Files and File I/O
4.1 Introducing the Linux/Unix I/O Model
4.2 Presenting a Basic Program Structure
4.3 Determining What Went Wrong
  4.3.1 Values for errno
  4.3.2 Error Message Style
4.4 Doing Input and Output
  4.4.1 Understanding File Descriptors
  4.4.2 Opening and Closing Files
    4.4.2.1 Mapping FILE * Variables to File Descriptors
    4.4.2.2 Closing All Open Files
  4.4.3 Reading and Writing
  4.4.4 Example: Unix cat
4.5 Random Access: Moving Around within a File
4.6 Creating Files
  4.6.1 Specifying Initial File Permissions
  4.6.2 Creating Files with creat()
  4.6.3 Revisiting open()
4.7 Forcing Data to Disk
4.8 Setting File Length
4.9 Summary
Exercises
Chapter 5: Directories and File Metadata
5.1 Considering Directory Contents
  5.1.1 Definitions
  5.1.2 Directory Contents
  5.1.3 Hard Links
    5.1.3.1 The GNU link Program
    5.1.3.2 Dot and Dot-Dot
  5.1.4 File Renaming
  5.1.5 File Removal
    5.1.5.1 Removing Open Files
    5.1.5.2 Using ISO C: remove()
  5.1.6 Symbolic Links
5.2 Creating and Removing Directories
5.3 Reading Directories
  5.3.1 Basic Directory Reading
    5.3.1.1 Portability Considerations
    5.3.1.2 Linux and BSD Directory Entries
  5.3.2 BSD Directory Positioning Functions
5.4 Obtaining Information about Files
  5.4.1 Linux File Types
  5.4.2 Retrieving File Information
  5.4.3 Linux Only: Specifying Higher-Precision File Times
  5.4.4 Determining File Type
    5.4.4.1 Device Information
    5.4.4.2 The V7 cat Revisited
  5.4.5 Working with Symbolic Links
5.5 Changing Ownership, Permission, and Modification Times
  5.5.1 Changing File Ownership: chown(), fchown(), and lchown()
  5.5.2 Changing Permissions: chmod() and fchmod()
  5.5.3 Changing Timestamps: utime()
    5.5.3.1 Faking utime(file, NULL)
  5.5.4 Using fchown() and fchmod() for Security
5.6 Summary
Exercises
Chapter 6: General Library Interfaces — Part 1
6.1 Times and Dates
  6.1.1 Retrieving the Current Time: time() and difftime()
  6.1.2 Breaking Down Times: gmtime() and localtime()
  6.1.3 Formatting Dates and Times
    6.1.3.1 Simple Time Formatting: asctime() and ctime()
    6.1.3.2 Complex Time Formatting: strftime()
  6.1.4 Converting a Broken-Down Time to a time_t
  6.1.5 Getting Time-Zone Information
    6.1.5.1 BSD Systems Gotcha: timezone(), Not timezone
6.2 Sorting and Searching Functions
  6.2.1 Sorting: qsort()
    6.2.1.1 Example: Sorting Employees
    6.2.1.2 Example: Sorting Directory Contents
  6.2.2 Binary Searching: bsearch()
6.3 User and Group Names
  6.3.1 User Database
  6.3.2 Group Database
6.4 Terminals: isatty()
6.5 Suggested Reading
6.6 Summary
Exercises
Chapter 7: Putting It All Together: ls
7.1 V7 ls Options
7.2 V7 ls Code
7.3 Summary
Exercises
Chapter 8: Filesystems and Directory Walks
8.1 Mounting and Unmounting Filesystems
  8.1.1 Reviewing the Background
  8.1.2 Looking at Different Filesystem Types
  8.1.3 Mounting Filesystems: mount
  8.1.4 Unmounting Filesystems: umount
8.2 Files for Filesystem Administration
  8.2.1 Using Mount Options
  8.2.2 Working with Mounted Filesystems: getmntent()
8.3 Retrieving Per-Filesystem Information
  8.3.1 POSIX Style: statvfs() and fstatvfs()
  8.3.2 Linux Style: statfs() and fstatfs()
8.4 Moving Around in the File Hierarchy
  8.4.1 Changing Directory: chdir() and fchdir()
  8.4.2 Getting the Current Directory: getcwd()
  8.4.3 Walking a Hierarchy: nftw()
    8.4.3.1 The nftw() Interface
    8.4.3.2 The nftw() Callback Function
8.5 Walking a File Tree: GNU du
8.6 Changing the Root Directory: chroot()
8.7 Summary
Exercises
PART II: Processes, IPC, and Internationalization

Chapter 9: Process Management and Pipes
9.1 Process Creation and Management
  9.1.1 Creating a Process: fork()
    9.1.1.1 After the fork(): Shared and Distinct Attributes
    9.1.1.2 File Descriptor Sharing
    9.1.1.3 File Descriptor Sharing and close()
  9.1.2 Identifying a Process: getpid() and getppid()
  9.1.3 Setting Process Priority: nice()
    9.1.3.1 POSIX vs. Reality
  9.1.4 Starting New Programs: The exec() Family
    9.1.4.1 The execve() System Call
    9.1.4.2 Wrapper Functions: execl() et al.
    9.1.4.3 Program Names and argv[0]
    9.1.4.4 Attributes Inherited across exec()
  9.1.5 Terminating a Process
    9.1.5.1 Defining Process Exit Status
    9.1.5.2 Returning from main()
    9.1.5.3 Exiting Functions
  9.1.6 Recovering a Child’s Exit Status
    9.1.6.1 Using POSIX Functions: wait() and waitpid()
    9.1.6.2 Using BSD Functions: wait3() and wait4()
9.2 Process Groups
  9.2.1 Job Control Overview
  9.2.2 Process Group Identification: getpgrp() and getpgid()
  9.2.3 Process Group Setting: setpgid() and setpgrp()
9.3 Basic Interprocess Communication: Pipes and FIFOs
  9.3.1 Pipes
    9.3.1.1 Creating Pipes
    9.3.1.2 Pipe Buffering
  9.3.2 FIFOs
9.4 File Descriptor Management
  9.4.1 Duplicating Open Files: dup() and dup2()
  9.4.2 Creating Nonlinear Pipelines: /dev/fd/XX
  9.4.3 Managing File Attributes: fcntl()
    9.4.3.1 The Close-on-exec Flag
    9.4.3.2 File Descriptor Duplication
    9.4.3.3 Manipulation of File Status Flags and Access Modes
    9.4.3.4 Nonblocking I/O for Pipes and FIFOs
    9.4.3.5 fcntl() Summary
9.5 Example: Two-Way Pipes in gawk
9.6 Suggested Reading
9.7 Summary
Exercises
Chapter 10: Signals
10.1 Introduction
10.2 Signal Actions
10.3 Standard C Signals: signal() and raise()
  10.3.1 The signal() Function
  10.3.2 Sending Signals Programmatically: raise()
10.4 Signal Handlers in Action
  10.4.1 Traditional Systems
  10.4.2 BSD and GNU/Linux
  10.4.3 Ignoring Signals
  10.4.4 Restartable System Calls
    10.4.4.1 Example: GNU Coreutils safe_read() and safe_write()
    10.4.4.2 GLIBC Only: TEMP_FAILURE_RETRY()
  10.4.5 Race Conditions and sig_atomic_t (ISO C)
  10.4.6 Additional Caveats
  10.4.7 Our Story So Far, Episode I
10.5 The System V Release 3 Signal APIs: sigset() et al.
10.6 POSIX Signals
  10.6.1 Uncovering the Problem
  10.6.2 Signal Sets: sigset_t and Related Functions
  10.6.3 Managing the Signal Mask: sigprocmask() et al.
  10.6.4 Catching Signals: sigaction()
  10.6.5 Retrieving Pending Signals: sigpending()
  10.6.6 Making Functions Interruptible: siginterrupt()
  10.6.7 Sending Signals: kill() and killpg()
  10.6.8 Our Story So Far, Episode II
10.7 Signals for Interprocess Communication
10.8 Important Special-Purpose Signals
  10.8.1 Alarm Clocks: sleep(), alarm(), and SIGALRM
    10.8.1.1 Harder but with More Control: alarm() and SIGALRM
    10.8.1.2 Simple and Easy: sleep()
  10.8.2 Job Control Signals
  10.8.3 Parental Supervision: Three Different Strategies
    10.8.3.1 Poor Parenting: Ignoring Children Completely
    10.8.3.2 Permissive Parenting: Supervising Minimally
    10.8.3.3 Strict Parental Control
10.9 Signals Across fork() and exec()
10.10 Summary
Exercises
Chapter 11: Permissions and User and Group ID Numbers
11.1 Checking Permissions ....................................................................................... 11.1.1 Real and Effective IDs .............................................................................. 11.1.2 Setuid and Setgid Bits .............................................................................. 11.2 Retrieving User and Group IDs ........................................................................ 11.3 Checking As the Real User: access() ........................................................... 11.4 Checking as the Effective User: euidaccess() (GLIBC) ............................. 11.5 Setting Extra Permission Bits for Directories .................................................... 11.5.1 Default Group for New Files and Directories ........................................... 11.5.2 Directories and the Sticky Bit ................................................................... 11.6 Setting Real and Effective IDs .......................................................................... 11.6.1 Changing the Group Set .......................................................................... 11.6.2 Changing the Real and Effective IDs ........................................................ 11.6.3 Using the Setuid and Setgid Bits .............................................................. 11.7 Working with All Three IDs: getresuid() and setresuid() (Linux) ....
404 405 406 407 410 412 412 412 414 415 416 416 419 421
11.8 Crossing a Security Minefield: Setuid root .......... 422
11.9 Suggested Reading .......... 423
11.10 Summary .......... 424
Exercises .......... 426
Chapter 12 General Library Interfaces — Part 2 .......... 427
12.1 Assertion Statements: assert() .......... 428
12.2 Low-Level Memory: The memXXX() Functions .......... 432
12.2.1 Setting Memory: memset() .......... 432
12.2.2 Copying Memory: memcpy(), memmove(), and memccpy() .......... 433
12.2.3 Comparing Memory Blocks: memcmp() .......... 434
12.2.4 Searching for a Byte Value: memchr() .......... 435
12.3 Temporary Files .......... 436
12.3.1 Generating Temporary Filenames (Bad) .......... 437
12.3.2 Creating and Opening Temporary Files (Good) .......... 441
12.3.3 Using the TMPDIR Environment Variable .......... 443
12.4 Committing Suicide: abort() .......... 445
12.5 Nonlocal Gotos .......... 446
12.5.1 Using Standard Functions: setjmp() and longjmp() .......... 447
12.5.2 Handling Signal Masks: sigsetjmp() and siglongjmp() .......... 449
12.5.3 Observing Important Caveats .......... 450
12.6 Pseudorandom Numbers .......... 454
12.6.1 Standard C: rand() and srand() .......... 455
12.6.2 POSIX Functions: random() and srandom() .......... 457
12.6.3 The /dev/random and /dev/urandom Special Files .......... 460
12.7 Metacharacter Expansions .......... 461
12.7.1 Simple Pattern Matching: fnmatch() .......... 462
12.7.2 Filename Expansion: glob() and globfree() .......... 464
12.7.3 Shell Word Expansion: wordexp() and wordfree() .......... 470
12.8 Regular Expressions .......... 471
12.9 Suggested Reading .......... 480
12.10 Summary .......... 481
Exercises .......... 482
Chapter 13 Internationalization and Localization .......... 485
13.1 Introduction .......... 486
13.2 Locales and the C Library .......... 487
13.2.1 Locale Categories and Environment Variables .......... 487
13.2.2 Setting the Locale: setlocale() .......... 489
13.2.3 String Collation: strcoll() and strxfrm() .......... 490
13.2.4 Low-Level Numeric and Monetary Formatting: localeconv() .......... 494
13.2.5 High-Level Numeric and Monetary Formatting: strfmon() and printf() .......... 498
13.2.6 Example: Formatting Numeric Values in gawk .......... 501
13.2.7 Formatting Date and Time Values: ctime() and strftime() .......... 503
13.2.8 Other Locale Information: nl_langinfo() .......... 504
13.3 Dynamic Translation of Program Messages .......... 507
13.3.1 Setting the Text Domain: textdomain() .......... 507
13.3.2 Translating Messages: gettext() .......... 508
13.3.3 Working with Plurals: ngettext() .......... 509
13.3.4 Making gettext() Easy to Use .......... 510
13.3.4.1 Portable Programs: "gettext.h" .......... 511
13.3.4.2 GLIBC Only: .......... 513
13.3.5 Rearranging Word Order with printf() .......... 514
13.3.6 Testing Translations in a Private Directory .......... 515
13.3.7 Preparing Internationalized Programs .......... 516
13.3.8 Creating Translations .......... 517
13.4 Can You Spell That for Me, Please? .......... 521
13.4.1 Wide Characters .......... 523
13.4.2 Multibyte Character Encodings .......... 523
13.4.3 Languages .......... 524
13.4.4 Conclusion .......... 525
13.5 Suggested Reading .......... 526
13.6 Summary .......... 526
Exercises .......... 527
Chapter 14 Extended Interfaces .......... 529
14.1 Allocating Aligned Memory: posix_memalign() and memalign() .......... 530
14.2 Locking Files .......... 531
14.2.1 File Locking Concepts .......... 531
14.2.2 POSIX Locking: fcntl() and lockf() .......... 533
14.2.2.1 Describing a Lock .......... 533
14.2.2.2 Obtaining and Releasing Locks .......... 536
14.2.2.3 Observing Locking Caveats .......... 538
14.2.3 BSD Locking: flock() .......... 539
14.2.4 Mandatory Locking .......... 540
14.3 More Precise Times .......... 543
14.3.1 Microsecond Times: gettimeofday() .......... 544
14.3.2 Microsecond File Times: utimes() .......... 545
14.3.3 Interval Timers: setitimer() and getitimer() .......... 546
14.3.4 More Exact Pauses: nanosleep() .......... 550
14.4 Advanced Searching with Binary Trees .......... 551
14.4.1 Introduction to Binary Trees .......... 551
14.4.2 Tree Management Functions .......... 554
14.4.3 Tree Insertion: tsearch() .......... 554
14.4.4 Tree Lookup and Use of A Returned Pointer: tfind() and tsearch() .......... 555
14.4.5 Tree Traversal: twalk() .......... 557
14.4.6 Tree Node Removal and Tree Deletion: tdelete() and tdestroy() .......... 561
14.5 Summary .......... 562
Exercises .......... 563
PART III Debugging and Final Project .......... 565
Chapter 15 Debugging .......... 567
15.1 First Things First .......... 568
15.2 Compilation for Debugging .......... 569
15.3 GDB Basics .......... 570
15.3.1 Running GDB .......... 571
15.3.2 Setting Breakpoints, Single-Stepping, and Setting Watchpoints .......... 574
15.4 Programming for Debugging .......... 577
15.4.1 Compile-Time Debugging Code .......... 577
15.4.1.1 Use Debugging Macros .......... 577
15.4.1.2 Avoid Expression Macros If Possible .......... 580
15.4.1.3 Reorder Code If Necessary .......... 582
15.4.1.4 Use Debugging Helper Functions .......... 584
15.4.1.5 Avoid Unions When Possible .......... 591
15.4.2 Runtime Debugging Code .......... 595
15.4.2.1 Add Debugging Options and Variables .......... 595
15.4.2.2 Use Special Environment Variables .......... 597
15.4.2.3 Add Logging Code .......... 601
15.4.2.4 Runtime Debugging Files .......... 602
15.4.2.5 Add Special Hooks for Breakpoints .......... 603
15.5 Debugging Tools .......... 605
15.5.1 The dbug Library — A Sophisticated printf() .......... 606
15.5.2 Memory Allocation Debuggers .......... 612
15.5.2.1 GNU/Linux mtrace .......... 613
15.5.2.2 Electric Fence .......... 614
15.5.2.3 Debugging Malloc: dmalloc .......... 619
15.5.2.4 Valgrind: A Versatile Tool .......... 623
15.5.2.5 Other Malloc Debuggers .......... 629
15.5.3 A Modern lint .......... 631
15.6 Software Testing .......... 632
15.7 Debugging Rules .......... 633
15.8 Suggested Reading .......... 637
15.9 Summary .......... 638
Exercises .......... 639
Chapter 16 A Project That Ties Everything Together .......... 641
16.1 Project Description .......... 642
16.2 Suggested Reading .......... 644
PART IV Appendixes .......... 647
Appendix A Teach Yourself Programming in Ten Years .......... 649
Appendix B Caldera Ancient UNIX License .......... 655
Appendix C GNU General Public License .......... 657
Index .......... 667
Preface
One of the best ways to learn about programming is to read well-written programs. This book teaches the fundamental Linux system call APIs—those that form the core of any significant program—by presenting code from production programs that you use every day.
By looking at concrete programs, you can not only see how to use the Linux APIs, but you also can examine the real-world issues (performance, portability, robustness) that arise in writing software. While the book’s title is Linux Programming by Example, everything we cover, unless otherwise noted, applies to modern Unix systems as well. In general we use “Linux” to mean the Linux kernel, and “GNU/Linux” to mean the total system (kernel, libraries, tools). Also, we often say “Linux” when we mean all of Linux, GNU/Linux and Unix; if something is specific to one system or the other, we mention it explicitly.
Audience This book is intended for the person who understands programming and is familiar with the basics of C, at least on the level of The C Programming Language by Kernighan and Ritchie. (Java programmers wishing to read this book should understand C pointers, since C code makes heavy use of them.) The examples use both the 1990 version of Standard C and Original C. In particular, you should be familiar with all C operators, control-flow structures, variable and pointer declarations and use, the string management functions, the use of exit(), and the suite of functions for file input/output. You should understand the basic concepts of standard input, standard output, and standard error and the fact that all C programs receive an array of character strings representing invocation options and arguments. You should also be familiar with the fundamental command-line tools, such as cd, cp, date, ln, ls, man (and info if you
have it), rmdir, and rm, the use of long and short command-line options, environment variables, and I/O redirection, including pipes. We assume that you want to write programs that work not just under GNU/Linux but across the range of Unix systems. To that end, we mark each interface as to its availability (GLIBC systems only, or defined by POSIX, and so on), and portability advice is included as an integral part of the text. The programming taught here may be at a lower level than you’re used to; that’s OK. The system calls are the fundamental building blocks for higher operations and are thus low-level by nature. This in turn dictates our use of C: The APIs were designed for use from C, and code that interfaces them to higher-level languages, such as C++ and Java, will necessarily be lower level in nature, and most likely, written in C. It may help to remember that “low level” doesn’t mean “bad,” it just means “more challenging.”
What You Will Learn

This book focuses on the basic APIs that form the core of Linux programming:

• Memory management
• File input/output
• File metadata
• Processes and signals
• Users and groups
• Programming support (sorting, argument parsing, and so on)
• Internationalization
• Debugging

We have purposely kept the list of topics short. We believe that it is intimidating to try to learn "all there is to know" from a single book. Most readers prefer smaller, more focused books, and the best Unix books are all written that way. So, instead of a single giant tome, we plan several volumes: one on Interprocess Communication (IPC) and networking, and another on software development and code portability. We also have an eye toward possible additional volumes in a Linux
Programming by Example series that will cover topics such as thread programming and GUI programming.

The APIs we cover include both system calls and library functions. Indeed, at the C level, both appear as simple function calls. A system call is a direct request for system services, such as reading or writing a file or creating a process. A library function, on the other hand, runs at the user level, possibly never requesting any services from the operating system. System calls are documented in section 2 of the reference manual (viewable online with the man command), and library functions are documented in section 3.

Our goal is to teach you the use of the Linux APIs by example: in particular, through the use, wherever possible, of both original Unix source code and the GNU utilities. Unfortunately, there aren't as many self-contained examples as we thought there'd be. Thus, we have written numerous small demonstration programs as well. We stress programming principles: especially those aspects of GNU programming, such as "no arbitrary limits," that make the GNU utilities into exceptional programs.

The choice of everyday programs to study is deliberate. If you've been using GNU/Linux for any length of time, you already understand what programs such as ls and cp do; it then becomes easy to dive straight into how the programs work, without having to spend a lot of time learning what they do.

Occasionally, we present both higher-level and lower-level ways of doing things. Usually the higher-level standard interface is implemented in terms of the lower-level interface or construct. We hope that such views of what's "under the hood" will help you understand how things work; for all the code you write, you should always use the higher-level, standard interface. Similarly, we sometimes introduce functions that provide certain functionality and then recommend (with a provided reason) that these functions be avoided!
The primary reason for this approach is so that you’ll be able to recognize these functions when you see them and thus understand the code using them. A well-rounded knowledge of a topic requires understanding not just what you can do, but what you should and should not do. Finally, each chapter concludes with exercises. Some involve modifying or writing code. Others are more in the category of “thought experiments” or “why do you think …” We recommend that you do all of them—they will help cement your understanding of the material.
Small Is Beautiful: Unix Programs

Hoare's law: "Inside every large program is a small program struggling to get out."
—C.A.R. Hoare
Initially, we planned to teach the Linux API by using the code from the GNU utilities. However, the modern versions of even simple command-line programs (like mv and cp) are large and many-featured. This is particularly true of the GNU variants of the standard utilities, which allow long and short options, do everything required by POSIX, and often have additional, seemingly unrelated options as well (like output highlighting). It then becomes reasonable to ask, “Given such a large and confusing forest, how can we focus on the one or two important trees?” In other words, if we present the current full-featured program, will it be possible to see the underlying core operation of the program? That is when Hoare’s law1 inspired us to look to the original Unix programs for example code. The original V7 Unix utilities are small and straightforward, making it easy to see what’s going on and to understand how the system calls are used. (V7 was released around 1979; it is the common ancestor of all modern Unix systems, including GNU/Linux and the BSD systems.) For many years, Unix source code was protected by copyrights and trade secret license agreements, making it difficult to use for study and impossible to publish. This is still true of all commercial Unix source code. However, in 2002, Caldera (currently operating as SCO) made the original Unix code (through V7 and 32V Unix) available under an Open Source style license (see Appendix B, “Caldera Ancient UNIX License,” page 655). This makes it possible for us to include the code from the early Unix system in this book.
1 This famous statement was made at The International Workshop on Efficient Production of Large Programs in Jablonna, Poland, August 10–14, 1970.

Standards

Throughout the book we refer to several different formal standards. A standard is a document describing how something works. Formal standards exist for many things; for example, the shape, placement, and meaning of the holes in the electrical outlet in your wall are defined by a formal standard so that all the power cords in your country work in all the outlets. So, too, formal standards for computing systems define how they are supposed to work; this enables developers and users to know what to expect from their software and enables them to complain to their vendor when software doesn't work. Of interest to us here are:

1. ISO/IEC International Standard 9899: Programming Languages — C, 1990. The first formal standard for the C programming language.

2. ISO/IEC International Standard 9899: Programming Languages — C, Second edition, 1999. The second (and current) formal standard for the C programming language.

3. ISO/IEC International Standard 14882: Programming Languages — C++, 1998. The first formal standard for the C++ programming language.

4. ISO/IEC International Standard 14882: Programming Languages — C++, 2003. The second (and current) formal standard for the C++ programming language.

5. IEEE Standard 1003.1–2001: Standard for Information Technology — Portable Operating System Interface (POSIX®). The current version of the POSIX standard; describes the behavior expected of Unix and Unix-like systems. This edition covers both the system call and library interface, as seen by the C/C++ programmer, and the shell and utilities interface, seen by the user. It consists of several volumes:

• Base Definitions. The definitions of terms, facilities, and header files.
• Base Definitions — Rationale. Explanations and rationale for the choice of facilities that both are and are not included in the standard.
• System Interfaces. The system calls and library functions. POSIX terms them all "functions."
• Shell and Utilities. The shell language and utilities available for use with shell programs and interactively.

Although language standards aren't exciting reading, you may wish to consider purchasing a copy of the C standard: It provides the final definition of the language. Copies
can be purchased from ANSI2 and from ISO.3 (The PDF version of the C standard is quite affordable.)

The POSIX standard can be ordered from The Open Group.4 By working through their publications catalog to the items listed under "CAE Specifications," you can find individual pages for each part of the standard (named "C031" through "C034"). Each one's page provides free access to the online HTML version of the particular volume.

The POSIX standard is intended for implementation on both Unix and Unix-like systems, as well as non-Unix systems. Thus, the base functionality it provides is a subset of what Unix systems have. However, the POSIX standard also defines optional extensions—additional functionality, for example, for threads or real-time support. Of most importance to us is the X/Open System Interface (XSI) extension, which describes facilities from historical Unix systems.

Throughout the book, we mark each API as to its availability: ISO C, POSIX, XSI, GLIBC only, or nonstandard but commonly available.
Features and Power: GNU Programs

Restricting ourselves to just the original Unix code would have made an interesting history book, but it would not have been very useful in the 21st century. Modern programs do not have the same constraints (memory, CPU power, disk space, and speed) that the early Unix systems did. Furthermore, they need to operate in a multilingual world—ASCII and American English aren't enough.

More importantly, one of the primary freedoms expressly promoted by the Free Software Foundation and the GNU Project5 is the "freedom to study." GNU programs are intended to provide a large corpus of well-written programs that journeyman programmers can use as a source from which to learn.
2 http://www.ansi.org
3 http://www.iso.ch
4 http://www.opengroup.org
5 http://www.gnu.org
By using GNU programs, we want to meet both goals: show you well-written, modern code from which you will learn how to write good code and how to use the APIs well. We believe that GNU software is better because it is free (in the sense of “freedom,” not “free beer”). But it’s also recognized that GNU software is often technically better than the corresponding Unix counterparts, and we devote space in Section 1.4, “Why GNU Programs Are Better,” page 14, to explaining why. A number of the GNU code examples come from gawk (GNU awk). The main reason is that it’s a program with which we’re very familiar, and therefore it was easy to pick examples from it. We don’t otherwise make any special claims about it.
Summary of Chapters

Driving a car is a holistic process that involves multiple simultaneous tasks. In many ways, Linux programming is similar, requiring understanding of multiple aspects of the API, such as file I/O, file metadata, directories, storage of time information, and so on.

The first part of the book looks at enough of these individual items to enable studying the first significant program, the V7 ls. Then we complete the discussion of files and users by looking at file hierarchies and the way filesystems work and are used.

Chapter 1, "Introduction," page 3, describes the Unix and Linux file and process models, looks at the differences between Original C and 1990 Standard C, and provides an overview of the principles that make GNU programs generally better than standard Unix programs.

Chapter 2, "Arguments, Options, and the Environment," page 23, describes how a C program accesses and processes command-line arguments and options and explains how to work with the environment.

Chapter 3, "User-Level Memory Management," page 51, provides an overview of the different kinds of memory in use and available in a running process. User-level memory management is central to every nontrivial application, so it's important to understand it early on.
Chapter 4, "Files and File I/O," page 83, discusses basic file I/O, showing how to create and use files. This understanding is important for everything else that follows.

Chapter 5, "Directories and File Metadata," page 117, describes how directories, hard links, and symbolic links work. It then describes file metadata, such as owners, permissions, and so on, as well as covering how to work with directories.

Chapter 6, "General Library Interfaces — Part 1," page 165, looks at the first set of general programming interfaces that we need so that we can make effective use of a file's metadata.

Chapter 7, "Putting It All Together: ls," page 207, ties together everything seen so far by looking at the V7 ls program.

Chapter 8, "Filesystems and Directory Walks," page 227, describes how filesystems are mounted and unmounted and how a program can tell what is mounted on the system. It also describes how a program can easily "walk" an entire file hierarchy, taking appropriate action for each object it encounters.

The second part of the book deals with process creation and management, interprocess communication with pipes and signals, user and group IDs, and additional general programming interfaces. The book then describes internationalization with GNU gettext, followed by several advanced APIs.

Chapter 9, "Process Management and Pipes," page 283, looks at process creation, program execution, IPC with pipes, and file descriptor management, including nonblocking I/O.

Chapter 10, "Signals," page 347, discusses signals, a simplistic form of interprocess communication. Signals also play an important role in a parent process's management of its children.

Chapter 11, "Permissions and User and Group ID Numbers," page 403, looks at how processes and files are identified, how permission checking works, and how the setuid and setgid mechanisms work.
Chapter 12, "General Library Interfaces — Part 2," page 427, looks at the rest of the general APIs; many of these are more specialized than the first general set of APIs.

Chapter 13, "Internationalization and Localization," page 485, explains how to enable your programs to work in multiple languages, with almost no pain.

Chapter 14, "Extended Interfaces," page 529, describes several extended versions of interfaces covered in previous chapters, as well as covering file locking in full detail.

We round the book off with a chapter on debugging, since (almost) no one gets things right the first time, and we suggest a final project to cement your knowledge of the APIs covered in this book.

Chapter 15, "Debugging," page 567, describes the basics of the GDB debugger, transmits as much of our programming experience in this area as possible, and looks at several useful tools for doing different kinds of debugging.

Chapter 16, "A Project That Ties Everything Together," page 641, presents a significant programming project that makes use of just about everything covered in the book.

Several appendices cover topics of interest, including the licenses for the source code used in this book.

Appendix A, "Teach Yourself Programming in Ten Years," page 649, invokes the famous saying, "Rome wasn't built in a day." So too, Linux/Unix expertise and understanding only come with time and practice. To that end, we have included this essay by Peter Norvig, which we highly recommend.

Appendix B, "Caldera Ancient UNIX License," page 655, covers the Unix source code used in this book.

Appendix C, "GNU General Public License," page 657, covers the GNU source code used in this book.
Typographical Conventions

Like all books on computer-related topics, we use certain typographical conventions to convey information. Definitions or first uses of terms appear in italics, like the word "Definitions" at the beginning of this sentence. Italics are also used for emphasis, for citations of other works, and for commentary in examples. Variable items such as arguments or filenames appear like this. Occasionally, we use a bold font when a point needs to be made strongly.

Things that exist on a computer are in a constant-width font, such as filenames (foo.c) and command names (ls, grep). Short snippets that you type are additionally enclosed in single quotes: 'ls -l *.c'.

$ and > are the Bourne shell primary and secondary prompts and are used to display interactive examples. User input appears in a different font from regular computer output in examples. Examples look like this:

$ ls -1        Look at files. Option is digit 1, not letter l
foo
bar
baz
We prefer the Bourne shell and its variants (ksh93, Bash) over the C shell; thus, all our examples show only the Bourne shell. Be aware that quoting and line-continuation rules are different in the C shell; if you use it, you’re on your own!6

When referring to functions in programs, we append an empty pair of parentheses to the function’s name: printf(), strcpy(). When referring to a manual page (accessible with the man command), we follow the standard Unix convention of writing the command or function name in italics and the section in parentheses after it, in regular type: awk(1), printf(3).
Where to Get Unix and GNU Source Code

You may wish to have copies of the programs we use in this book for your own experimentation and review. All the source code is available over the Internet, and your GNU/Linux distribution contains the source code for the GNU utilities.
6. See the csh(1) and tcsh(1) manpages and the book Using csh & tcsh, by Paul DuBois, O’Reilly & Associates, Sebastopol, CA, USA, 1995. ISBN: 1-56592-132-1.
Unix Code

Archives of various “ancient” versions of Unix are maintained by The UNIX Heritage Society (TUHS), http://www.tuhs.org. Of most interest is that it is possible to browse the archive of old Unix source code on the Web. Start with http://minnie.tuhs.org/UnixTree/. All the example code in this book is from the Seventh Edition Research UNIX System, also known as “V7.”

The TUHS site is physically located in Australia, although there are mirrors of the archive around the world—see http://www.tuhs.org/archive_sites.html. This page also indicates that the archive is available for mirroring with rsync. (See http://rsync.samba.org/ if you don’t have rsync: It’s standard on GNU/Linux systems.)

You will need about 2–3 gigabytes of disk to copy the entire archive. To copy the archive, create an empty directory, and in it, run the following commands:

mkdir Applications 4BSD PDP-11 PDP-11/Trees VAX Other
rsync -avz minnie.tuhs.org::UA_Root .
rsync -avz minnie.tuhs.org::UA_Applications Applications
rsync -avz minnie.tuhs.org::UA_4BSD 4BSD
rsync -avz minnie.tuhs.org::UA_PDP11 PDP-11
rsync -avz minnie.tuhs.org::UA_PDP11_Trees PDP-11/Trees
rsync -avz minnie.tuhs.org::UA_VAX VAX
rsync -avz minnie.tuhs.org::UA_Other Other
You may wish to omit copying the Trees directory, which contains extractions of several versions of Unix, and occupies around 700 megabytes of disk. You may also wish to consult the TUHS mailing list to see if anyone near you can provide copies of the archive on CD-ROM, to avoid transferring so much data over the Internet.

The folks at Southern Storm Software, Pty. Ltd., in Australia, have “modernized” a portion of the V7 user-level code so that it can be compiled and run on current systems, most notably GNU/Linux. This code can be downloaded from their web site.7

It’s interesting to note that V7 code does not contain any copyright or permission notices in it. The authors wrote the code primarily for themselves and their research, leaving the permission issues to AT&T’s corporate licensing department.
7. http://www.southern-storm.com.au/v7upgrade.html
GNU Code

If you’re using GNU/Linux, then your distribution will have come with source code, presumably in whatever packaging format it uses (Red Hat RPM files, Debian DEB files, Slackware .tar.gz files, etc.). Many of the examples in the book are from the GNU Coreutils, version 5.0. Find the appropriate CD-ROM for your GNU/Linux distribution, and use the appropriate tool to extract the code. Or follow the instructions in the next few paragraphs to retrieve the code.

If you prefer to retrieve the files yourself from the GNU ftp site, you will find them at ftp://ftp.gnu.org/gnu/coreutils/coreutils-5.0.tar.gz. You can use the wget utility to retrieve the file:

$ wget ftp://ftp.gnu.org/gnu/coreutils/coreutils-5.0.tar.gz    Retrieve the distribution
… lots of output here as file is retrieved …

Alternatively, you can use good old-fashioned ftp to retrieve the file:

$ ftp ftp.gnu.org                                  Connect to GNU ftp site
Connected to ftp.gnu.org (199.232.41.7).
220 GNU FTP server ready.
Name (ftp.gnu.org:arnold): anonymous               Use anonymous ftp
331 Please specify the password.
Password:                                          Password does not echo on screen
230-If you have any problems with the GNU software or its downloading,
230-please refer your questions to .
... Lots of verbiage deleted
230 Login successful. Have fun.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /gnu/coreutils                             Change to Coreutils directory
250 Directory successfully changed.
ftp> bin
200 Switching to Binary mode.
ftp> hash                                          Print # signs as progress indicators
Hash mark printing on (1024 bytes/hash mark).
ftp> get coreutils-5.0.tar.gz                      Retrieve file
local: coreutils-5.0.tar.gz remote: coreutils-5.0.tar.gz
227 Entering Passive Mode (199,232,41,7,86,107)
150 Opening BINARY mode data connection for coreutils-5.0.tar.gz (6020616 bytes)
#################################################################################
#################################################################################
...
226 File send OK.
6020616 bytes received in 2.03e+03 secs (2.9 Kbytes/sec)
ftp> quit                                          Log off
221 Goodbye.
Once you have the file, extract it as follows:

$ gzip -dc < coreutils-5.0.tar.gz | tar -xvpf -    Extract files
... lots of output here as files are extracted ...

Systems using GNU tar may use this incantation:

$ tar -xvpzf coreutils-5.0.tar.gz                  Extract files
... lots of output here as files are extracted ...
In compliance with the GNU General Public License, here is the Copyright information for all GNU programs quoted in this book. All the programs are “free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.” See Appendix C, “GNU General Public License,” page 657, for the text of the GNU General Public License.

Coreutils 5.0

File                Copyright dates
lib/safe-read.c     Copyright © 1993–1994, 1998, 2002
lib/safe-write.c    Copyright © 2002
lib/utime.c         Copyright © 1998, 2001–2002
lib/xreadlink.c     Copyright © 2001
src/du.c            Copyright © 1988–1991, 1995–2003
src/env.c           Copyright © 1986, 1991–2003
src/install.c       Copyright © 1989–1991, 1995–2002
src/link.c          Copyright © 2001–2002
src/ls.c            Copyright © 1985, 1988, 1990, 1991, 1995–2003
src/pathchk.c       Copyright © 1991–2003
src/sort.c          Copyright © 1988, 1991–2002
src/sys2.h          Copyright © 1997–2003
src/wc.c            Copyright © 1985, 1991, 1995–2002

Gawk 3.0.6

File                Copyright dates
eval.c              Copyright © 1986, 1988, 1989, 1991–2000

Gawk 3.1.3

File                Copyright dates
awk.h               Copyright © 1986, 1988, 1989, 1991–2003
builtin.c           Copyright © 1986, 1988, 1989, 1991–2003
eval.c              Copyright © 1986, 1988, 1989, 1991–2003
io.c                Copyright © 1986, 1988, 1989, 1991–2003
main.c              Copyright © 1986, 1988, 1989, 1991–2003
posix/gawkmisc.c    Copyright © 1986, 1988, 1989, 1991–1998, 2001–2003

Gawk 3.1.4

File                Copyright dates
builtin.c           Copyright © 1986, 1988, 1989, 1991–2004

GLIBC 2.3.2

File                Copyright dates
locale/locale.h     Copyright © 1991, 1992, 1995–2002
posix/unistd.h      Copyright © 1991–2003
time/sys/time.h     Copyright © 1991–1994, 1996–2003

Make 3.80

File                Copyright dates
read.c              Copyright © 1988–1997, 2002
Where to Get the Example Programs Used in This Book

The example programs used in this book can be found at http://authors.phptr.com/robbins.
About the Cover

“This is the weapon of a Jedi Knight …, an elegant weapon for a more civilized age. For over a thousand generations the Jedi Knights were the guardians of peace and justice in the Old Republic. Before the dark times, before the Empire.”
—Obi-Wan Kenobi
You may be wondering why we chose to put a light saber on the cover and to use it throughout the book’s interior. What does it represent, and how does it relate to Linux programming?
In the hands of a Jedi Knight, a light saber is both a powerful weapon and a thing of beauty. Its use demonstrates the power, knowledge, control of the Force, and arduous training of the Jedi who wields it. The elegance of the light saber mirrors the elegance of the original Unix API design. There, too, the studied, precise use of the APIs and the Software Tools and GNU design principles lead to today’s powerful, flexible, capable GNU/Linux system. This system demonstrates the knowledge and understanding of the programmers who wrote all its components. And, of course, light sabers are just way cool!
Acknowledgments

Writing a book is lots of work, and doing it well requires help from many people. Dr. Brian W. Kernighan, Dr. Doug McIlroy, Peter Memishian, and Peter van der Linden reviewed the initial book proposal. David J. Agans, Fred Fish, Don Marti, Jim Meyering, Peter Norvig, and Julian Seward provided reprint permission for various items quoted throughout the book. Thanks to Geoff Collyer, Ulrich Drepper, Yosef Gold, Dr. C.A.R. (Tony) Hoare, Dr. Manny Lehman, Jim Meyering, Dr. Dennis M. Ritchie, Julian Seward, Henry Spencer, and Dr. Wladyslaw M. Turski, who provided much useful general information. Thanks also to the other members of the GNITS gang: Karl Berry, Akim DeMaille, Ulrich Drepper, Greg McGary, Jim Meyering, François Pinard, and Tom Tromey, who all provided helpful feedback about good programming practice. Karl Berry, Alper Ersoy, and Dr. Nelson H.F. Beebe provided valuable technical help with the Texinfo and DocBook/XML toolchains.

Good technical reviewers not only make sure that an author gets his facts right, they also ensure that he thinks carefully about his presentation. Dr. Nelson H.F. Beebe, Geoff Collyer, Russ Cox, Ulrich Drepper, Randy Lechlitner, Dr. Brian W. Kernighan, Peter Memishian, Jim Meyering, Chet Ramey, and Louis Taber acted as technical reviewers for the entire book. Dr. Michael Brennan provided helpful comments on Chapter 15. Both the prose and many of the example programs benefited from their reviews. I hereby thank all of them. As most authors usually say here, “Any remaining errors are mine.”

I would especially like to thank Mark Taub of Pearson Education for initiating this project, for his enthusiasm for the series, and for his help and advice as the book moved through its various stages. Anthony Gemmellaro did a phenomenal job of realizing my concept for the cover, and Gail Cocker’s interior design is beautiful. Faye Gemmellaro made the production process enjoyable, instead of a chore. Dmitry Kirsanov and Alina Kirsanova did the figures, page layout, and indexing; they were a pleasure to work with.

Finally, my deepest gratitude and love to my wife, Miriam, for her support and encouragement during the book’s writing.

Arnold Robbins
Nof Ayalon
ISRAEL
I Files and Users

Chapter 1    Introduction                                page 3
Chapter 2    Arguments, Options, and the Environment     page 23
Chapter 3    User-Level Memory Management                page 51
Chapter 4    Files and File I/O                          page 83
Chapter 5    Directories and File Metadata               page 117
Chapter 6    General Library Interfaces — Part 1         page 165
Chapter 7    Putting It All Together: ls                 page 207
Chapter 8    Filesystems and Directory Walks             page 227
1 Introduction

In this chapter

1.1 The Linux/Unix File Model        page 4
1.2 The Linux/Unix Process Model     page 10
1.3 Standard C vs. Original C        page 12
1.4 Why GNU Programs Are Better      page 14
1.5 Portability Revisited            page 19
1.6 Suggested Reading                page 20
1.7 Summary                          page 21
Exercises                            page 22
If there is one phrase that summarizes the primary GNU/Linux (and therefore Unix) concepts, it’s “files and processes.” In this chapter we review the Linux file and process models. These are important to understand because the system calls are almost all concerned with modifying some attribute or part of the state of a file or a process.
Next, because we’ll be examining code in both styles, we briefly review the major differences between 1990 Standard C and Original C. Finally, we discuss at some length what makes GNU programs “better,” programming principles that we’ll see in use in the code.

This chapter contains a number of intentional simplifications. The full details are covered as we progress through the book. If you’re already a Linux wizard, please forgive us.
1.1 The Linux/Unix File Model

One of the driving goals in the original Unix design was simplicity. Simple concepts are easy to learn and use. When the concepts are translated into simple APIs, simple programs are then easy to design, write, and get correct. In addition, simple code is often smaller and more efficient than more complicated designs.

The quest for simplicity was driven by two factors. From a technical point of view, the original PDP-11 minicomputers on which Unix was developed had a small address space: 64 Kilobytes total on the smaller systems, 64K of code and 64K of data on the large ones. These restrictions applied not just to regular programs (so-called user-level code), but to the operating system itself (kernel-level code). Thus, not only “Small Is Beautiful” aesthetically, but “Small Is Beautiful” because there was no other choice!

The second factor was a negative reaction to contemporary commercial operating systems, which were needlessly complicated, with obtuse command languages, multiple kinds of file I/O, and little generality or symmetry. (Steve Johnson once remarked that “Using TSO is like trying to kick a dead whale down a beach.” TSO is one of the obtuse mainframe time-sharing systems just described.)
1.1.1 Files and Permissions

The Unix file model is as simple as it gets: A file is a linear stream of bytes. Period. The operating system imposes no preordained structure on files: no fixed or varying record sizes, no indexed files, nothing. The interpretation of file contents is entirely up to the application. (This isn’t quite true, as we’ll see shortly, but it’s close enough for a start.)

Once you have a file, you can do three things with the file’s data: read them, write them, or execute them.

Unix was designed for time-sharing minicomputers; this implies a multiuser environment from the get-go. Once there are multiple users, it must be possible to specify a file’s permissions: Perhaps user jane is user fred’s boss, and jane doesn’t want fred to read the latest performance evaluations. For file permission purposes, users are classified into three distinct categories: user: the owner of a file; group: the group of users associated with this file (discussed shortly); and other: anybody else.

For each of these categories, every file has separate read, write, and execute permission bits associated with it, yielding a total of nine permission bits. This shows up in the first field of the output of ‘ls -l’:

$ ls -l progex.texi
-rw-r--r--    1 arnold   devel      5614 Feb 24 18:02 progex.texi
Here, arnold and devel are the owner and group of progex.texi, and -rw-r--r-- are the file type and permissions. The first character is a dash for regular file, a d for directories, or one of a small set of other characters for other kinds of files that aren’t important at the moment. Each subsequent group of three characters represents read, write, and execute permission for the owner, group, and “other,” respectively. In this example, progex.texi is readable and writable by the owner, and readable by the group and other. The dashes indicate absent permissions, thus the file is not executable by anyone, nor is it writable by the group or other.

The owner and group of a file are stored as numeric values known as the user ID (UID) and group ID (GID); standard library functions that we present later in the book make it possible to print the values as human-readable names. A file’s owner can change the permission by using the chmod (change mode) command. (As such, file permissions are sometimes referred to as the “file mode.”) A file’s group can be changed with the chgrp (change group) and chown (change owner) commands.1

1. Some systems allow regular users to change the ownership on their files to someone else, thus “giving them away.” The details are standardized by POSIX but are a bit messy. Typical GNU/Linux configurations do not allow it.
Chapter 1 • Introduction
Group permissions were intended to support cooperative work: Although one person in a group or department may own a particular file, perhaps everyone in that group needs to be able to modify it. (Consider a collaborative marketing paper or data from a survey.)

When the system goes to check a file access (usually upon opening a file), if the UID of the process matches that of the file, the owner permissions apply. If those permissions deny the operation (say, a write to a file with -r--rw-rw- permissions), the operation fails; Unix and Linux do not proceed to test the group and other permissions.2 The same is true if the UID is different but the GID matches; if the group permissions deny the operation, it fails.

Unix and Linux support the notion of a superuser: a user with special privileges. This user is known as root and has the UID of 0. root is allowed to do anything; all bets are off, all doors are open, all drawers unlocked.3 (This can have significant security implications, which we touch on throughout the book but do not cover exhaustively.) Thus, even if a file is mode ----------, root can still read and write the file. (One exception is that the file can’t be executed. But as root can add execute permission, the restriction doesn’t prevent anything.)

The user/group/other, read/write/execute permissions model is simple, yet flexible enough to cover most situations. Other, more powerful but more complicated, models exist and are implemented on different systems, but none of them are well enough standardized and broadly enough implemented to be worth discussing in a general-purpose text like this one.
2. The owner can always change the permission, of course. Most users don’t disable write permission for themselves.
3. There are some rare exceptions to this rule, all of which are beyond the scope of this book.

1.1.2 Directories and Filenames

Once you have a file, you need someplace to keep it. This is the purpose of the directory (known as a “folder” on Windows and Apple Macintosh systems). A directory is a special kind of file, which associates filenames with particular collections of file metadata, known as inodes. Directories are special because they can only be updated by the operating system, by the system calls described in Chapter 4, “Files and File I/O,” page 83. They are also special in that the operating system dictates the format of directory entries.

Filenames may contain any valid 8-bit byte except the / (forward slash) character and ASCII NUL, the character whose bits are all zero. Early Unix systems limited filenames to 14 bytes; modern systems allow individual filenames to be up to 255 bytes.

The inode contains all the information about a file except its name: the type, owner, group, permissions, size, modification and access times. It also stores the locations on disk of the blocks containing the file’s data. All of these are data about the file, not the file’s data itself, thus the term metadata.

Directory permissions have a slightly different meaning from those for file permissions. Read permission means the ability to search the directory; that is, to look through it to see what files it contains. Write permission is the ability to create and remove files in the directory. Execute permission is the ability to go through a directory when opening or otherwise accessing a contained file or subdirectory.

NOTE: If you have write permission on a directory, you can remove files in that directory, even if they don’t belong to you! When used interactively, the rm command notices this, and asks you for confirmation in such a case. The /tmp directory has write permission for everyone, but your files in /tmp are quite safe because /tmp usually has the so-called sticky bit set on it:

$ ls -ld /tmp
drwxrwxrwt   11 root     root     4096 May 15 17:11 /tmp

Note the t in the last position of the first field. On most directories this position has an x in it. With the sticky bit set, only you, as the file’s owner, or root may remove your files. (We discuss this in more detail in Section 11.5.2, “Directories and the Sticky Bit,” page 414.)
1.1.3 Executable Files

Remember we said that the operating system doesn’t impose a structure on files? Well, we’ve already seen that that was a white lie when it comes to directories. It’s also the case for binary executable files. To run a program, the kernel has to know what part of a file represents instructions (code) and what part represents data. This leads to the notion of an object file format, which is the definition for how these things are laid out within a file on disk.
Although the kernel will only run a file laid out in the proper format, it is up to user-level utilities to create these files. The compiler for a programming language (such as Ada, Fortran, C, or C++) creates object files, and then a linker or loader (usually named ld) binds the object files with library routines to create the final executable. Note that even if a file has all the right bits in all the right places, the kernel won’t run it if the appropriate execute permission bit isn’t turned on (or at least one execute bit for root).

Because the compiler, assembler, and loader are user-level tools, it’s (relatively) easy to change object file formats as needs develop over time; it’s only necessary to “teach” the kernel about the new format and then it can be used. The part that loads executables is relatively small and this isn’t an impossible task. Thus, Unix file formats have evolved over time. The original format was known as a.out (Assembler OUTput). The next format, still used on some commercial systems, is known as COFF (Common Object File Format), and the current, most widely used format is ELF (Extensible Linking Format). Modern GNU/Linux systems use ELF.

The kernel recognizes that an executable file contains binary object code by looking at the first few bytes of the file for special magic numbers. These are sequences of two or four bytes that the kernel recognizes as being special. For backwards compatibility, modern Unix systems recognize multiple formats. ELF files begin with the four characters "\177ELF".

Besides binary executables, the kernel also supports executable scripts. Such a file also begins with a magic number: in this case, the two regular characters #!. A script is a program executed by an interpreter, such as the shell, awk, Perl, Python, or Tcl. The #! line provides the full path to the interpreter and, optionally, one single argument:

#! /bin/awk -f
BEGIN { print "hello, world" }

Let’s assume the above contents are in a file named hello.awk and that the file is executable. When you type ‘hello.awk’, the kernel runs the program as if you had typed ‘/bin/awk -f hello.awk’. Any additional command-line arguments are also passed on to the program. In this case, awk runs the program and prints the universally known hello, world message. The #! mechanism is an elegant way of hiding the distinction between binary executables and script executables. If hello.awk is renamed to just hello, the user typing
‘hello’ can’t tell (and indeed shouldn’t have to know) that hello isn’t a binary executable program.
1.1.4 Devices

One of Unix’s most notable innovations was the unification of file I/O and device I/O.4 Devices appear as files in the filesystem, regular permissions apply to their access, and the same I/O system calls are used for opening, reading, writing, and closing them. All of the “magic” to make devices look like files is hidden in the kernel. This is just another aspect of the driving simplicity principle in action: We might phrase it as no special cases for user code.

Two devices appear frequently in everyday use, particularly at the shell level: /dev/null and /dev/tty.

/dev/null is the “bit bucket.” All data sent to /dev/null is discarded by the operating system, and attempts to read from it always return end-of-file (EOF) immediately.

/dev/tty is the process’s current controlling terminal—the one to which it listens when a user types the interrupt character (typically CTRL-C) or performs job control (CTRL-Z).

GNU/Linux systems, and many modern Unix systems, supply /dev/stdin, /dev/stdout, and /dev/stderr devices, which provide a way to name the open files each process inherits upon startup.

Other devices represent real hardware, such as tape and disk drives, CD-ROM drives, and serial ports. There are also software devices, such as pseudo-ttys, that are used for networking logins and windowing systems. /dev/console represents the system console, a particular hardware device on minicomputers. On modern computers, /dev/console is the screen and keyboard, but it could be a serial port.

Unfortunately, device-naming conventions are not standardized, and each operating system has different names for tapes, disks, and so on. (Fortunately, that’s not an issue for what we cover in this book.) Devices have either a b or c in the first character of ‘ls -l’ output:
$ ls -l /dev/tty /dev/hda
brw-rw----   1 root     disk       3,   0 Aug 31 02:31 /dev/hda
crw-rw-rw-   1 root     root       5,   0 Feb 26 08:44 /dev/tty

The initial b represents block devices, and a c represents character devices. Device files are discussed further in Section 5.4, “Obtaining Information about Files,” page 139.

4. This feature first appeared in Multics, but Multics was never widely used.
1.2 The Linux/Unix Process Model

A process is a running program.5 Processes have the following attributes:

• A unique process identifier (the PID)
• A parent process (with an associated identifier, the PPID)
• Permission identifiers (UID, GID, groupset, and so on)
• An address space, separate from those of all other processes
• A program running in that address space
• A current working directory (‘.’)
• A current root directory (/; changing this is an advanced topic)
• A set of open files, directories, or both
• A permissions-to-deny mask for use in creating new files
• A set of strings representing the environment
• A scheduling priority (an advanced topic)
• Settings for signal disposition (an advanced topic)
• A controlling terminal (also an advanced topic)

When the main() function begins execution, all of these things have already been put in place for the running program. System calls are available to query and change each of the above items; covering them is the purpose of this book.

New processes are always created by an existing process. The existing process is termed the parent, and the new process is termed the child. Upon booting, the kernel handcrafts the first, primordial process, which runs the program /sbin/init; it has process ID 1 and serves several administrative functions. All other processes are descendants of init. (init’s parent is the kernel, often listed as process ID 0.)

5. Processes can be suspended, in which case they are not “running”; however, neither are they terminated. In any case, in the early stages of the climb up the learning curve, it pays not to be too pedantic.
The child-to-parent relationship is one-to-one; each process has only one parent, and thus it’s easy to find out the PID of the parent. The parent-to-child relationship is one-to-many; any given process can create a potentially unlimited number of children. Thus, there is no easy way for a process to find out the PIDs of all its children. (In practice, it’s not necessary, anyway.) A parent process can arrange to be notified when a child process terminates (“dies”), and it can also explicitly wait for such an event.

Each process’s address space (memory) is separate from that of every other. Unless two processes have made explicit arrangement to share memory, one process cannot affect the address space of another. This is important; it provides a basic level of security and system reliability. (For efficiency, the system arranges to share the read-only executable code of the same program among all the processes running that program. This is transparent to the user and to the running program.)

The current working directory is the one to which relative pathnames (those that don’t start with a /) are relative. This is the directory you are “in” whenever you issue a ‘cd someplace’ command to the shell.

By convention, all programs start out with three files already open: standard input, standard output, and standard error. These are where input comes from, output goes to, and error messages go to, respectively. In the course of this book, we will see how these are put in place. A parent process can open additional files and have them already available for a child process; the child will have to know they’re there, either by way of some convention or by a command-line argument or environment variable.

The environment is a set of strings, each of the form ‘name=value’. Functions exist for querying and setting environment variables, and child processes inherit the environment of their parents. Typical environment variables are things like PATH and HOME in the shell.
Many programs look for the existence and value of specific environment variables in order to control their behavior.

It is important to understand that a single process may execute multiple programs during its lifetime. Unless explicitly changed, all of the other system-maintained attributes (current directory, open files, PID, etc.) remain the same. The separation of “starting a new process” from “choosing which program to run” is a key Unix innovation.
It makes many operations simple and straightforward. Other operating systems that combine the two operations are less general and more complicated to use.
1.2.1 Pipes: Hooking Processes Together

You’ve undoubtedly used the pipe construct (‘|’) in the shell to connect two or more running programs. A pipe acts like a file: One process writes to it using the normal write operation, and the other process reads from it using the read operation. The processes don’t (usually) know that their input/output is a pipe and not a regular file.

Just as the kernel hides the “magic” for devices, making them act like regular files, so too the kernel does the work for pipes, arranging to pause the pipe’s writer when the pipe fills up and to pause the reader when no data is waiting to be read. The file I/O paradigm with pipes thus acts as a key mechanism for connecting running programs; no temporary files are needed. Again, generality and simplicity at work: no special cases for user code.
1.3 Standard C vs. Original C

For many years, the de facto definition of C was found in the first edition of the book The C Programming Language, by Brian Kernighan and Dennis Ritchie. This book described C as it existed for Unix and on the systems to which the Bell Labs developers had ported it. Throughout this book, we refer to it as “Original C,” although it’s also common for it to be referred to as “K&R C,” after the book’s two authors. (Dennis Ritchie designed and implemented C.)

The 1990 ISO Standard for C formalized the language’s definition, including the functions in the C library (such as printf() and fopen()). The C standards committee did an admirable job of standardizing existing practice and avoided inventing new features, with one notable exception (and a few minor ones). The most visible change in the language was the use of function prototypes, borrowed from C++.

Standard C, C++, and the Java programming language use function prototypes for function declarations and definitions. A prototype describes not only the function’s return value but also the number and type of its arguments. With prototypes, a compiler can do complete type checking at the point of a function call:
extern int myfunc(struct my_struct *a,              /* Declaration */
                  struct my_struct *b,
                  double c, int d);

int myfunc(struct my_struct *a,                     /* Definition */
           struct my_struct *b,
           double c, int d)
{
    ...
}

...
struct my_struct s, t;
int j;
...
/* Function call, somewhere else: */
j = myfunc(& s, & t, 3.1415, 42);

This function call is fine. But consider an erroneous call:

j = myfunc(-1, -2, 0);              /* Wrong number and types of arguments */
The compiler can immediately diagnose this call as being invalid. However, in Original C, functions are declared without the argument list being specified:

extern int myfunc();                /* Returns int, arguments unknown */
Furthermore, function definitions list the parameter names in the function header, and then declare the parameters before the function body. Parameters of type int don’t have to be declared, and if a function returns int, that doesn’t have to be declared either:

myfunc(a, b, c, d)                  /* Return type is int */
struct my_struct *a, *b;
double c;                           /* Note, no declaration of parameter d */
{
    ...
}
Consider again the same erroneous function call: ‘j = myfunc(-1, -2, 0);’. In Original C, the compiler has no way of knowing that you’ve (accidentally, we assume) passed the wrong arguments to myfunc(). Such erroneous calls generally lead to hard-to-find runtime problems (such as segmentation faults, whereby the program dies), and the Unix lint program was created to deal with these kinds of things. So, although function prototypes were a radical departure from existing practice, their additional type checking was deemed too important to be without, and they were added to the language with little opposition.
In 1990 Standard C, code written in the original style, for both declarations and definitions, is valid. This makes it possible to continue to compile millions of lines of existing code with a standard-conforming compiler. New code, obviously, should be written with prototypes because of the improved possibilities for compile-time error checking.

1999 Standard C continues to allow original style declarations and definitions. However, the “implicit int” rule was removed; functions must have a return type, and all parameters must be declared. Furthermore, when a program called a function that had not been formally declared, Original C would create an implicit declaration for the function, giving it a return type of int. 1990 Standard C did the same, additionally noting that it had no information about the parameters. 1999 Standard C no longer provides this “auto-declare” feature. Other notable additions in Standard C are the const keyword, also from C++, and the volatile keyword, which the committee invented.

For the code you’ll see in this book, understanding the different function declaration and definition syntaxes is the most important thing. For V7 code using original style definitions, we have added comments showing the equivalent prototype. Otherwise, we have left the code alone, preferring to show it exactly as it was originally written and as you’ll see it if you download the code yourself.

Although 1999 C adds some additional keywords and features beyond the 1990 version, we have chosen to stick to the 1990 dialect, since C99 compilers are not yet commonplace. Practically speaking, this doesn’t matter: C89 code should compile and run without change when a C99 compiler is used, and the new C99 features don’t affect our discussion or use of the fundamental Linux/Unix APIs.
1.4 Why GNU Programs Are Better

What is it that makes a GNU program a GNU program?6 What makes GNU software “better” than other (free or non-free) software? The most obvious difference is the GNU General Public License (GPL), which describes the distribution terms for GNU software. But this is usually not the reason you hear people saying “Get the GNU version of xyz, it’s much better.” GNU software is generally more robust, and performs better, than standard Unix versions. In this section we look at some of the reasons why, and at the document that describes the principles of GNU software design.

The GNU Coding Standards describes how to write software for the GNU project. It covers a range of topics. You can read the GNU Coding Standards online at http://www.gnu.org/prep/standards.html. See the online version for pointers to the source files in other formats. In this section, we describe only those parts of the GNU Coding Standards that relate to program design and implementation.

6 This section is adapted from an article by the author that appeared in Issue 16 of Linux Journal. (See http://www.linuxjournal.com/article.php?sid=1135.) Reprinted and adapted by permission.
1.4.1 Program Design

Chapter 3 of the GNU Coding Standards provides general advice about program design. The four main issues are compatibility (with standards and Unix), the language to write in, reliance on nonstandard features of other programs (in a word, “none”), and the meaning of “portability.”

Compatibility with Standard C and POSIX, and to a lesser extent, with Berkeley Unix is an important goal. But it’s not an overriding one. The general idea is to provide all necessary functionality, with command-line switches to provide a strict ISO or POSIX mode.

C is the preferred language for writing GNU software since it is the most commonly available language. In the Unix world, Standard C is now common, but if you can easily support Original C, you should do so. Although the coding standards prefer C over C++, C++ is now commonplace too. One widely used GNU package written in C++ is groff (GNU troff). With GCC supporting C++, it has been our experience that installing groff is not difficult.

The standards state that portability is a bit of a red herring. GNU utilities are ultimately intended to run on the GNU kernel with the GNU C Library.7 But since the kernel isn’t finished yet and users are using GNU tools on non-GNU systems, portability is desirable, just not paramount. The standard recommends using Autoconf for achieving portability among different Unix systems.
7 This statement refers to the HURD kernel, which is still under development (as of early 2004). GCC and GNU C Library (GLIBC) development take place mostly on Linux-based systems today.
1.4.2 Program Behavior

Chapter 4 of the GNU Coding Standards provides general advice about program behavior. We will return to look at one of its sections in detail, below. The chapter focuses on program design, formatting error messages, writing libraries (by making them reentrant), and standards for the command-line interface.

Error message formatting is important since several tools, notably Emacs, use the error messages to help you go straight to the point in the source file or data file at which an error occurred.

GNU utilities should use a function named getopt_long() for processing the command line. This function provides command-line option parsing for both traditional Unix-style options (‘gawk -F: ...’) and GNU-style long options (‘gawk --field-separator=: ...’). All programs should provide --help and --version options, and when a long name is used in one program, it should be used the same way in other GNU programs. To this end, there is a rather exhaustive list of long options used by current GNU programs. As a simple yet obvious example, --verbose is spelled exactly the same way in all GNU programs. Contrast this to -v, -V, -d, etc., in many Unix programs.

Most of Chapter 2, “Arguments, Options, and the Environment,” page 23, is devoted to the mechanics of argument and option parsing.
1.4.3 C Code Programming

The most substantive part of the GNU Coding Standards is Chapter 5, which describes how to write C code, covering things like formatting the code, correct use of comments, using C cleanly, naming your functions and variables, and declaring, or not declaring, standard system functions that you wish to use.

Code formatting is a religious issue; many people have different styles that they prefer. We personally don’t like the FSF’s style, and if you look at gawk, which we maintain, you’ll see it’s formatted in standard K&R style (the code layout style used in both editions of the Kernighan and Ritchie book). But this is the only variation in gawk from this part of the coding standards.

Nevertheless, even though we don’t like the FSF’s style, we feel that when modifying some other program, sticking to the coding style already used is of the utmost importance. Having a consistent coding style is more important than which coding style you pick. The GNU Coding Standards also makes this point. (Sometimes, there is no detectable consistent coding style, in which case the program is probably overdue for a trip through either GNU indent or Unix’s cb.)

What we find important about the chapter on C coding is that the advice is good for any C coding, not just if you happen to be working on a GNU program. So, if you’re just learning C or even if you’ve been working in C (or C++) for a while, we recommend this chapter to you since it encapsulates many years of experience.
1.4.4 Things That Make a GNU Program Better

We now examine the section titled Writing Robust Programs in Chapter 4, Program Behavior for All Programs, of the GNU Coding Standards. This section provides the principles of software design that make GNU programs better than their Unix counterparts. We quote selected parts of the chapter, with some examples of cases in which these principles have paid off.

    Avoid arbitrary limits on the length or number of any data structure, including file names, lines, files, and symbols, by allocating all data structures dynamically. In most Unix utilities, “long lines are silently truncated.” This is not acceptable in a GNU utility.

This rule is perhaps the single most important rule in GNU software design—no arbitrary limits. All GNU utilities should be able to manage arbitrary amounts of data. While this requirement perhaps makes it harder for the programmer, it makes things much better for the user. At one point, we had a gawk user who regularly ran an awk program on more than 650,000 files (no, that’s not a typo) to gather statistics. gawk would grow to over 192 megabytes of data space, and the program ran for around seven CPU hours. He would not have been able to run his program using another awk implementation.8

    Utilities reading files should not drop NUL characters, or any other nonprinting characters including those with codes above 0177. The only sensible exceptions would be utilities specifically intended for interface to certain types of terminals or printers that can’t handle those characters.

8 This situation occurred circa 1993; the truism is even more obvious today, as users process gigabytes of log files with gawk.
It is also well known that Emacs can edit any arbitrary file, including files containing binary data!

    Whenever possible, try to make programs work properly with sequences of bytes that represent multibyte characters, using encodings such as UTF-8 and others.9

    Check every system call for an error return, unless you know you wish to ignore errors. Include the system error text (from perror or equivalent) in every error message resulting from a failing system call, as well as the name of the file if any and the name of the utility. Just “cannot open foo.c” or “stat failed” is not sufficient.

Checking every system call provides robustness. This is another case in which life is harder for the programmer but better for the user. An error message detailing what exactly went wrong makes finding and solving any problems much easier.10

Finally, we quote from Chapter 1 of the GNU Coding Standards, which discusses how to write your program differently from the way a Unix program may have been written.

    For example, Unix utilities were generally optimized to minimize memory use; if you go for speed instead, your program will be very different. You could keep the entire input file in core and scan it there instead of using stdio. Use a smarter algorithm discovered more recently than the Unix program. Eliminate use of temporary files. Do it in one pass instead of two (we did this in the assembler).

    Or, on the contrary, emphasize simplicity instead of speed. For some applications, the speed of today’s computers makes simpler algorithms adequate.

    Or go for generality. For example, Unix programs often have static tables or fixed-size strings, which make for arbitrary limits; use dynamic allocation instead. Make sure your program handles NULs and other funny characters in the input files. Add a programming language for extensibility and write part of the program in that language.
9 Section 13.4, “Can You Spell That for Me, Please?”, page 521, provides an overview of multibyte characters and encodings.

10 The mechanics of checking for and reporting errors are discussed in Section 4.3, “Determining What Went Wrong,” page 86.
Or turn some parts of the program into independently usable libraries. Or use a simple garbage collector instead of tracking precisely when to free memory, or use a new GNU facility such as obstacks. An excellent example of the difference an algorithm can make is GNU diff. One of our system’s early incarnations was an AT&T 3B1: a system with a MC68010 processor, a whopping two megabytes of memory and 80 megabytes of disk. We did (and do) lots of editing on the manual for gawk, a file that is almost 28,000 lines long (although at the time, it was only in the 10,000-lines range). We used to use ‘diff -c’ quite frequently to look at our changes. On this slow system, switching to GNU diff made a stunning difference in the amount of time it took for the context diff to appear. The difference is almost entirely due to the better algorithm that GNU diff uses. The final paragraph mentions the idea of structuring a program as an independently usable library, with a command-line wrapper or other interface around it. One example of this is GDB, the GNU debugger, which is partially implemented as a command-line tool on top of a debugging library. (The separation of the GDB core functionality from the command interface is an ongoing development project.) This implementation makes it possible to write a graphical debugging interface on top of the basic debugging functionality.
1.4.5 Parting Thoughts about the “GNU Coding Standards”

The GNU Coding Standards is a worthwhile document to read if you wish to develop new GNU software, enhance existing GNU software, or just learn how to be a better programmer. The principles and techniques it espouses are what make GNU software the preferred choice of the Unix community.
1.5 Portability Revisited

Portability is something of a holy grail; always sought after, but not always obtainable, and certainly not easily. There are several aspects to writing portable code. The GNU Coding Standards discusses many of them. But there are others as well. Keep portability in mind at both higher and lower levels as you develop. We recommend these practices:

Code to standards. Although it can be challenging, it pays to be familiar with the formal standards for the language you’re using. In particular, pay attention to the 1990 and 1999 ISO standards for C and the 2003 standard for C++ since most Linux programming is done in one of those two languages. Also, the POSIX standard for library and system call interfaces, while large, has broad industry support. Writing to POSIX greatly improves the chances of successfully moving your code to other systems besides GNU/Linux. This standard is quite readable; it distills decades of experience and good practice.

Pick the best interface for the job. If a standard interface does what you need, use it in your code. Use Autoconf to detect an unavailable interface, and supply a replacement version of it for deficient systems. (For example, some older systems lack the memmove() function, which is fairly easy to code by hand or to pull from the GLIBC library.)

Isolate portability problems behind new interfaces. Sometimes, you may need to do operating-system-specific tasks that apply on some systems but not on others. (For example, on some systems, each program has to expand command-line wildcards instead of the shell doing it.) Create a new interface that does nothing on systems that don’t need it but does the correct thing on systems that do.

Use Autoconf for configuration. Avoid #ifdef if possible. If not, bury it in low-level library code. Use Autoconf to do the checking for the tests to be performed with #ifdef.
1.6 Suggested Reading

1. The C Programming Language, 2nd edition, by Brian W. Kernighan and Dennis M. Ritchie. Prentice-Hall, Englewood Cliffs, New Jersey, USA, 1989. ISBN: 0-13-110370-9.

This is the “bible” for C, covering the 1990 version of Standard C. It is a rather dense book, with lots of information packed into a startlingly small number of pages. You may need to read it through more than once; doing so is well worth the trouble.

2. C, A Reference Manual, 5th edition, by Samuel P. Harbison III and Guy L. Steele, Jr. Prentice-Hall, Upper Saddle River, New Jersey, USA, 2002. ISBN: 0-13-089592-X.
This book is also a classic. It covers Original C as well as the 1990 and 1999 standards. Because it is current, it makes a valuable companion to The C Programming Language. It covers many important items, such as internationalization-related types and library functions, that aren’t in the Kernighan and Ritchie book.

3. Notes on Programming in C, by Rob Pike, February 21, 1989. Available on the Web from many sites. Perhaps the most widely cited location is http://www.lysator.liu.se/c/pikestyle.html. (Many other useful articles are available from one level up: http://www.lysator.liu.se/c/.)

Rob Pike worked for many years at the Bell Labs research center where C and Unix were invented and did pioneering development there. His notes distill many years of experience into a “philosophy of clarity in programming” that is well worth reading.

4. The various links at http://www.chris-lott.org/resources/cstyle/. This site includes Rob Pike’s notes and several articles by Henry Spencer. Of particular note is the Recommended C Style and Coding Standards, originally written at the Bell Labs Indian Hill site.
1.7 Summary

• “Files and processes” summarizes the Linux/Unix worldview. The treatment of files as byte streams and devices as files, and the use of standard input, output, and error, simplify program design and unify the data access model. The permissions model is simple, yet flexible, applying to both files and directories.

• Processes are running programs that have user and group identifiers associated with them for permission checking, as well as other attributes such as open files and a current working directory.

• The most visible difference between Standard C and Original C is the use of function prototypes for stricter type checking. A good C programmer should be able to read Original-style code, since many existing programs use it. New code should be written using prototypes.

• The GNU Coding Standards describe how to write GNU programs. They provide numerous valuable techniques and guiding principles for producing robust, usable software. The “no arbitrary limits” principle is perhaps the single most important of these. This document is required reading for serious programmers.

• Making programs portable is a significant challenge. Guidelines and tools help, but ultimately experience is needed too.
Exercises

1. Read and comment on the article “The GNU Project”,11 by Richard M. Stallman, originally written in August of 1998.
11 http://www.gnu.org/gnu/thegnuproject.html
2 Arguments, Options, and the Environment
In this chapter

2.1 Option and Argument Conventions . . . . . . . . . . . . . page 24
2.2 Basic Command-Line Processing . . . . . . . . . . . . . . page 28
2.3 Option Parsing: getopt() and getopt_long() . . . . . . . page 30
2.4 The Environment . . . . . . . . . . . . . . . . . . . . . page 40
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . page 49
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . page 50
Command-line option and argument interpretation is usually the first task of any program. This chapter examines how C (and C++) programs access their command-line arguments, describes standard routines for parsing options, and takes a look at the environment.
2.1 Option and Argument Conventions

The word arguments has two meanings. The more technical definition is “all the ‘words’ on the command line.” For example:

$ ls main.c opts.c process.c
Here, the user typed four “words.” All four words are made available to the program as its arguments.

The second definition is more informal: Arguments are all the words on the command line except the command name. By default, Unix shells separate arguments from each other with whitespace (spaces or TAB characters). Quoting allows arguments to include whitespace:

$ echo here are    lots of     spaces          The shell “eats” the spaces
here are lots of spaces
$ echo "here are    lots of     spaces"        Spaces are preserved
here are    lots of     spaces
Quoting is transparent to the running program; echo never sees the double-quote characters. (Double and single quotes are different in the shell; a discussion of the rules is beyond the scope of this book, which focuses on C programming.) Arguments can be further classified as options or operands. In the previous two examples all the arguments were operands: files for ls and raw text for echo. Options are special arguments that each program interprets. Options change a program’s behavior, or they provide information to the program. By ancient convention, (almost) universally adhered to, options start with a dash (a.k.a. hyphen, minus sign) and consist of a single letter. Option arguments are information needed by an option, as opposed to regular operand arguments. For example, the fgrep program’s -f option means “use the contents of the following file as a list of strings to search for.” See Figure 2.1.
Command name   Option   Option argument        Operands
     |            |            |                  |
   fgrep         -f         patfile       foo.c bar.c baz.c

FIGURE 2.1 Command-line components
Thus, patfile is not a data file to search, but rather it’s for use by fgrep in defining the list of strings to search for.
2.1.1 POSIX Conventions

The POSIX standard describes a number of conventions that standard-conforming programs adhere to. Nothing requires that your programs adhere to these standards, but it’s a good idea for them to do so: Linux and Unix users the world over understand and use these conventions, and if your program doesn’t follow them, your users will be unhappy. (Or you won’t have any users!) Furthermore, the functions we discuss later in this chapter relieve you of the burden of manually adhering to these conventions for each program you write. Here they are, paraphrased from the standard:

1. Program names should have no less than two and no more than nine characters.

2. Program names should consist of only lowercase letters and digits.

3. Option names should be single alphanumeric characters. Multidigit options should not be allowed. For vendors implementing the POSIX utilities, the -W option is reserved for vendor-specific options.

4. All options should begin with a ‘-’ character.

5. For options that don’t require option arguments, it should be possible to group multiple options after a single ‘-’ character. (For example, ‘foo -a -b -c’ and ‘foo -abc’ should be treated the same way.)

6. When an option does require an option argument, the argument should be separated from the option by a space (for example, ‘fgrep -f patfile’).
The standard, however, does allow for historical practice, whereby sometimes the option and the operand could be in the same string: ‘fgrep -fpatfile’. In practice, the getopt() and getopt_long() functions interpret ‘-fpatfile’ as ‘-f patfile’, not as ‘-f -p -a -t ...’.

7. Option arguments should not be optional. This means that when a program documents an option as requiring an option argument, that option’s argument must always be present or else the program will fail. GNU getopt() does provide for optional option arguments since they’re occasionally useful.

8. If an option takes an argument that may have multiple values, the program should receive that argument as a single string, with values separated by commas or whitespace. For example, suppose a hypothetical program myprog requires a list of users for its -u option. Then, it should be invoked in one of these two ways:

myprog -u "arnold,joe,jane"         Separate with commas
myprog -u "arnold joe jane"         Separate with whitespace

In such a case, you’re on your own for splitting out and processing each value (that is, there is no standard routine), but doing so manually is usually straightforward.

9. Options should come first on the command line, before operands. Unix versions of getopt() enforce this convention. GNU getopt() does not by default, although you can tell it to.

10. The special argument ‘--’ indicates the end of all options. Any subsequent arguments on the command line are treated as operands, even if they begin with a dash.

11. The order in which options are given should not matter. However, for mutually exclusive options, when one option overrides the setting of another, then (so to speak) the last one wins. If an option that has arguments is repeated, the program should process the arguments in order. For example, ‘myprog -u arnold -u jane’ is the same as ‘myprog -u "arnold,jane"’. (You have to enforce this yourself; getopt() doesn’t help you.)

12. It is OK for the order of operands to matter to a program. Each program should document such things.
13. Programs that read or write named files should treat the single argument ‘-’ as meaning standard input or standard output, as is appropriate for the program.

Note that many standard programs don’t follow all of the above conventions. The primary reason is historical compatibility; many such programs predate the codifying of these conventions.
2.1.2 GNU Long Options

As we saw in Section 1.4.2, “Program Behavior,” page 16, GNU programs are encouraged to use long options of the form --help, --verbose, and so on. Such options, since they start with ‘--’, do not conflict with the POSIX conventions. They also can be easier to remember, and they provide the opportunity for consistency across all GNU utilities. (For example, --help is the same everywhere, as compared with -h for “help,” -i for “information,” and so on.) GNU long options have their own conventions, implemented by the getopt_long() function:

1. For programs implementing POSIX utilities, every short (single-letter) option should also have a long option.

2. Additional GNU-specific long options need not have a corresponding short option, but we recommend that they do.

3. Long options can be abbreviated to the shortest string that remains unique. For example, if there are two options --verbose and --verbatim, the shortest possible abbreviations are --verbo and --verba.

4. Option arguments are separated from long options either by whitespace or by an = sign. For example, --sourcefile=/some/file or --sourcefile /some/file.

5. Options and arguments may be interspersed with operands on the command line; getopt_long() will rearrange things so that all options are processed and then all operands are available sequentially. (This behavior can be suppressed.)

6. Option arguments can be optional. For such options, the argument is deemed to be present if it’s in the same string as the option. This works only for short options. For example, if -x is such an option, given ‘foo -xYANKEES -y’, the argument to -x is ‘YANKEES’. For ‘foo -x -y’, there is no argument to -x.
7. Programs can choose to allow long options to begin with a single dash. (This is common with many X Window programs.)

Much of this will become clearer when we examine getopt_long() later in the chapter. The GNU Coding Standards devotes considerable space to listing all the long and short options used by GNU programs. If you’re writing a program that accepts long options, see if option names already in use might make sense for you to use as well.
2.2 Basic Command-Line Processing

A C program accesses its command-line arguments through its parameters, argc and argv. The argc parameter is an integer, indicating the number of arguments there are, including the command name. There are two common ways to declare main(), varying in how argv is declared:

int main(int argc, char *argv[])        int main(int argc, char **argv)
{ ... }                                 { ... }
Practically speaking, there’s no difference between the two declarations, although the first is conceptually clearer: argv is an array of pointers to characters. The second is more commonly used: argv is a pointer to a pointer. Also, the second definition is technically more correct, and it is what we use. Figure 2.2 depicts this situation.

          char **      char *
argv ---> [ * ]------> "cat"
          [ * ]------> "file1"
          [ * ]------> "file2"
          [NULL]

(The strings are C strings, terminated with '\0'; the final element is a NULL pointer, binary zero.)

FIGURE 2.2 Memory for argv
By convention, argv[0] is the program’s name. (For details, see Section 9.1.4.3, “Program Names and argv[0],” page 297.) Subsequent entries are the command line arguments. The final entry in the argv array is a NULL pointer.
argc indicates how many arguments there are; since C is zero-based, it is always true that ‘argv[argc] == NULL’. Because of this, particularly in Unix code, you will see different ways of checking for the end of arguments, such as looping until a counter is greater than or equal to argc, or until ‘argv[i] == 0’ or while ‘*argv != NULL’ and so on. These are all equivalent.
2.2.1 The V7 echo Program

Perhaps the simplest example of command-line processing is the V7 echo program, which prints its arguments to standard output, separated by spaces and terminated with a newline. If the first argument is -n, then the trailing newline is omitted. (This is used for prompting from shell scripts.) Here’s the code:1

#include <stdio.h>

main(argc, argv)                    /* int main(int argc, char **argv) */
int argc;
char *argv[];
{
        register int i, nflg;

        nflg = 0;
        if(argc > 1 && argv[1][0] == '-' && argv[1][1] == 'n') {
                nflg++;
                argc--;
                argv++;
        }
        for(i=1; i<argc; i++) {
                fputs(argv[i], stdout);
                if (i < argc-1)
                        putchar(' ');
        }
        if(nflg == 0)
                putchar('\n');
        exit(0);
}

#define getnode(n)      if (nextfree) n = nextfree, nextfree = nextfree->nextp;\
                        else n = more_nodes()
#define freenode(n)     ((n)->flags = 0, (n)->exec_count = 0,\
                        (n)->nextp = nextfree, nextfree = (n))
The nextfree variable points to a linked list of NODE structures. The getnode() macro pulls the first structure off the list if one is there. Otherwise, it calls more_nodes() to allocate a new list of free NODEs. The freenode() macro releases a NODE by putting it at the head of the list.

NOTE  When first writing your application, do it the simple way: use malloc() and free() directly. If and only if profiling your program shows you that it’s spending a significant amount of time in the memory-allocation functions should you consider writing a private allocator.
Chapter 3 • User-Level Memory Management

3.2.1.8 Example: Reading Arbitrarily Long Lines

Since this is, after all, Linux Programming by Example, it's time for a real-life example. The following code is the readline() function from GNU Make 3.80 (ftp://ftp.gnu.org/gnu/make/make-3.80.tar.gz). It can be found in the file read.c. Following the "no arbitrary limits" principle, lines in a Makefile can be of any length. Thus, this routine's primary job is to read lines of any length and make sure that they fit into the buffer being used.

A secondary job is to deal with continuation lines. As in C, lines that end with a backslash logically continue to the next line. The strategy used is to maintain a buffer. As many lines as will fit in the buffer are kept there, with pointers keeping track of the start of the buffer, the current line, and the next line. Here is the structure:

struct ebuffer
  {
    char *buffer;       /* Start of the current line in the buffer. */
    char *bufnext;      /* Start of the next line in the buffer. */
    char *bufstart;     /* Start of the entire buffer. */
    unsigned int size;  /* Malloc'd size of buffer. */
    FILE *fp;           /* File, or NULL if this is an internal buffer. */
    struct floc floc;   /* Info on the file in fp (if any). */
  };
The size field tracks the size of the entire buffer, and fp is the FILE pointer for the input file. The floc structure isn't of interest for studying the routine. The function returns the number of lines in the buffer. (The line numbers here are relative to the start of the function, not the source file.)

 1  static long
 2  readline (ebuf)                 /* ISO C: static long readline(struct ebuffer *ebuf) */
 3       struct ebuffer *ebuf;
 4  {
 5    char *p;
 6    char *end;
 7    char *start;
 8    long nlines = 0;
 9
10    /* The behaviors between string and stream buffers are different
11       enough to warrant different functions.  Do the Right Thing. */
12
13    if (!ebuf->fp)
14      return readstring (ebuf);
15
16    /* When reading from a file, we always start over at the beginning of the
17       buffer for each new line.  */
18
19    p = start = ebuf->bufstart;
20    end = p + ebuf->size;
21    *p = '\0';
3.2 Memory Allocation
We start by noticing that GNU Make is written in K&R C for maximal portability. The initial part declares variables, and if the input is coming from a string (such as from the expansion of a macro), the code hands things off to a different function, readstring() (lines 13 and 14). The test '!ebuf->fp' (line 13) is a shorter (and less clear, in our opinion) test for a null pointer; it's the same as 'ebuf->fp == NULL'. Lines 19–21 initialize the pointers and insert a NUL byte, which is the C string terminator character, at the end of the buffer. The function then starts a loop (lines 23–95), which runs as long as there is more input.

23    while (fgets (p, end - p, ebuf->fp) != 0)
24      {
25        char *p2;
26        unsigned long len;
27        int backslash;
28
29        len = strlen (p);
30        if (len == 0)
31          {
32            /* This only happens when the first thing on the line is a '\0'.
33               It is a pretty hopeless case, but (wonder of wonders) Athena
34               lossage strikes again!  (xmkmf puts NULs in its makefiles.)
35               There is nothing really to be done; we synthesize a newline so
36               the following line doesn't appear to be part of this line.  */
37            error (&ebuf->floc,
38                   _("warning: NUL character seen; rest of line ignored"));
39            p[0] = '\n';
40            len = 1;
41          }
The fgets() function (line 23) takes a pointer to a buffer, a count of bytes to read, and a FILE * variable for the file to read from. It reads one less than the count so that it can terminate the buffer with '\0'. This design lets you avoid buffer overflows. It stops upon encountering a newline or end-of-file, and if the newline is there, it's placed in the buffer. It returns NULL on failure or the (pointer) value of the first argument on success. In this case, the arguments are a pointer to the free area of the buffer, the amount of room left in the buffer, and the FILE pointer to read from. The comment on lines 32–36 is self-explanatory; if a zero byte is encountered, the program prints an error message and pretends it was an empty line. After compensating for the NUL byte (lines 30–41), the code continues.
43        /* Jump past the text we just read.  */
44        p += len;
45
46        /* If the last char isn't a newline, the whole line didn't fit into
47           the buffer.  Get some more buffer and try again.  */
48        if (p[-1] != '\n')
49          goto more_buffer;
50
51        /* We got a newline, so add one to the count of lines.  */
52        ++nlines;
Lines 43–52 increment the pointer into the buffer past the data just read. The code then checks whether the last character read was a newline. The construct p[-1] (line 48) looks at the character in front of p, just as p[0] is the current character and p[1] is the next. This looks strange at first, but if you translate it into terms of pointer math, *(p-1), it makes more sense, and the indexing form is possibly easier to read. If the last character was not a newline, this means that we've run out of space, and the code goes off (with goto) to get more (line 49). Otherwise, the line count is incremented.

54  #if !defined(WINDOWS32) && !defined(__MSDOS__)
55        /* Check to see if the line was really ended with CRLF; if so ignore
56           the CR.  */
57        if ((p - start) > 1 && p[-2] == '\r')
58          {
59            --p;
60            p[-1] = '\n';
61          }
62  #endif
Lines 54–62 deal with input lines that follow the Microsoft convention of ending with a Carriage Return-Line Feed (CR-LF) combination, and not just a Line Feed (or newline), which is the Linux/Unix convention. Note that the #if excludes the code on Microsoft systems; apparently the library on those systems handles this conversion automatically. This is also true of other non-Unix systems that support Standard C.

64        backslash = 0;
65        for (p2 = p - 2; p2 >= start; --p2)
66          {
67            if (*p2 != '\\')
68              break;
69            backslash = !backslash;
70          }
72        if (!backslash)
73          {
74            p[-1] = '\0';
75            break;
76          }
77
78        /* It was a backslash/newline combo.  If we have more space, read
79           another line.  */
80        if (end - p >= 80)
81          continue;
82
83        /* We need more space at the end of our buffer, so realloc it.
84           Make sure to preserve the current offset of p.  */
85      more_buffer:
86        {
87          unsigned long off = p - start;
88          ebuf->size *= 2;
89          start = ebuf->buffer = ebuf->bufstart = (char *)
90            xrealloc (start, ebuf->size);
91          p = start + off;
92          end = start + ebuf->size;
93          *p = '\0';
94        }
95      }
So far we’ve dealt with the mechanics of getting at least one complete line into the buffer. The next chunk handles the case of a continuation line. It has to make sure, though, that the final backslash isn’t part of multiple backslashes at the end of the line. It tracks whether the total number of such backslashes is odd or even by toggling the backslash variable from 0 to 1 and back. (Lines 64–70.) If the number is even, the test ‘! backslash’ (line 72) will be true. In this case, the final newline is replaced with a NUL byte, and the code leaves the loop. On the other hand, if the number is odd, then the line contained an even number of backslash pairs (representing escaped backslashes, \\ as in C), and a final backslashnewline combination.5 In this case, if at least 80 free bytes are left in the buffer, the program continues around the loop to read another line (lines 78–81). (The use of the magic number 80 isn’t great; it would have been better to define and use a symbolic constant.)
5 This code has the scent of practical experience about it: It wouldn't be surprising to learn that earlier versions simply checked for a final backslash before the newline, until someone complained that it didn't work when there were multiple backslashes at the end of the line.
Upon reaching line 83, the program needs more space in the buffer. Here's where the dynamic memory management comes into play. Note the comment about preserving p (lines 83–84); we discussed this earlier in terms of reinitializing pointers into dynamic memory. end is also reset. Line 89 resizes the memory. Note that here the function being called is xrealloc(). Many GNU programs use "wrapper" functions around malloc() and realloc() that automatically print an error message and exit if the standard routines return NULL. Such a wrapper might look like this:

extern const char *myname;      /* set in main() */

void *xrealloc(void *ptr, size_t amount)
{
    void *p = realloc(ptr, amount);

    if (p == NULL) {
        fprintf(stderr, "%s: out of memory!\n", myname);
        exit(1);
    }
    return p;
}
Thus, if xrealloc() returns, it's guaranteed to return a valid pointer. (This strategy complies with the "check every call for errors" principle while avoiding the code clutter that comes with doing so using the standard routines directly.) In addition, this allows valid use of the construct 'ptr = xrealloc(ptr, new_size)', which we otherwise warned against earlier. Note that it is not always appropriate to use such a wrapper. If you wish to handle errors yourself, you shouldn't use it. On the other hand, if running out of memory is always a fatal error, then such a wrapper is quite handy.

 97    if (ferror (ebuf->fp))
 98      pfatal_with_name (ebuf->floc.filenm);
 99
100    /* If we found some lines, return how many.
101       If we didn't, but we did find _something_, that indicates we read the
102       last line of a file with no final newline; return 1.
103       If we read nothing, we're at EOF; return -1.  */
104
105    return nlines ? nlines : p == ebuf->bufstart ? -1 : 1;
106  }
Finally, the readline() routine checks for I/O errors, and then returns a descriptive return value. The function pfatal_with_name() (line 98) doesn't return.

3.2.1.9 GLIBC Only: Reading Entire Lines: getline() and getdelim()

Now that you've seen how to read an arbitrary-length line, you can breathe a sigh of relief that you don't have to write such a function for yourself. GLIBC provides two functions to do this for you:

#define _GNU_SOURCE 1
#include <stdio.h>                                     GLIBC
#include <sys/types.h>    /* for ssize_t */

ssize_t getline(char **lineptr, size_t *n, FILE *stream);
ssize_t getdelim(char **lineptr, size_t *n, int delim, FILE *stream);
Defining the constant _GNU_SOURCE brings in the declaration of the getline() and getdelim() functions. Otherwise, they're implicitly declared as returning int. <sys/types.h> is needed so you can declare a variable of type ssize_t to hold the return value. (An ssize_t is a "signed size_t." It's meant for the same use as a size_t, but for places where you need to be able to hold negative values as well.)

Both functions manage dynamic storage for you, ensuring that the buffer containing an input line is always big enough to hold the input line. They differ in that getline() reads until a newline character, and getdelim() uses a user-provided delimiter character. The common arguments are as follows:

char **lineptr
    A pointer to a char * pointer to hold the address of a dynamically allocated buffer. It should be initialized to NULL if you want getline() to do all the work. Otherwise, it should point to storage previously obtained from malloc().

size_t *n
    An indication of the size of the buffer. If you allocated your own buffer, *n should contain the buffer's size. Both functions update *n to the new buffer size if they change it.

FILE *stream
    The location from which to get input characters.
The functions return -1 upon end-of-file or error. The strings hold the terminating newline or delimiter (if there was one), as well as a terminating zero byte. Using getline() is easy, as shown in ch03-getline.c:

/* ch03-getline.c --- demonstrate getline(). */

#define _GNU_SOURCE 1
#include <stdio.h>
#include <sys/types.h>

/* main --- read a line and echo it back out until EOF. */

int main(void)
{
    char *line = NULL;
    size_t size = 0;
    ssize_t ret;

    while ((ret = getline(& line, & size, stdin)) != -1)
        printf("(%lu) %s", size, line);

    return 0;
}
Here it is in action, showing the size of the buffer. The third input and output lines are purposely long, to force getline() to grow the buffer; thus, they wrap around:

$ ch03-getline                                         Run the program
this is a line
(120) this is a line
And another line.
(120) And another line.
A llllllllllllllllloooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnngggggggggggg llliiiiiiiiiiiiiiiiiiinnnnnnnnnnnnnnnnnnnneeeeeeeeee
(240) A llllllllllllllllloooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnngggggggggggg llliiiiiiiiiiiiiiiiiiinnnnnnnnnnnnnnnnnnnneeeeeeeeee
3.2.2 String Copying: strdup()

One extremely common operation is to allocate storage for a copy of a string. It's so common that many programs provide a simple function for it instead of using inline code, and often that function is named strdup():
#include <string.h>
#include <stdlib.h>    /* for malloc() */

/* strdup --- malloc() storage for a copy of string and copy it */

char *strdup(const char *str)
{
    size_t len;
    char *copy;

    len = strlen(str) + 1;    /* include room for terminating '\0' */
    copy = malloc(len);

    if (copy != NULL)
        strcpy(copy, str);

    return copy;              /* returns NULL if error */
}
With the 2001 POSIX standard, programmers the world over can breathe a little easier: This function is now part of POSIX as an XSI extension:

#include <string.h>                                    XSI

char *strdup(const char *str);                         Duplicate str

The return value is a pointer to dynamically allocated storage holding a copy of str, or NULL if there was an error. The returned value should be freed with free() when it's no longer needed.
3.2.3 System Calls: brk() and sbrk()

The four routines we've covered (malloc(), calloc(), realloc(), and free()) are the standard, portable functions to use for dynamic memory management. On Unix systems, the standard functions are implemented on top of two additional, very primitive routines, which directly change the size of a process's address space. We present them here to help you understand how GNU/Linux and Unix work ("under the hood" again); it is highly unlikely that you will ever need to use these functions in a regular program. They are declared as follows:

#include <unistd.h>                                    Common
#include <stddef.h>    /* Necessary for GLIBC 2 systems */

int brk(void *end_data_segment);
void *sbrk(ptrdiff_t increment);
The brk() system call actually changes the process's address space. The address is a pointer representing the end of the data segment (really the heap area, as shown earlier in Figure 3.1). Its argument is an absolute logical address representing the new end of the address space. It returns 0 on success or -1 on failure.

The sbrk() function is easier to use; its argument is the increment in bytes by which to change the address space. By calling it with an increment of 0, you can determine where the address space currently ends. Thus, to increase your address space by 32 bytes, use code like this:

char *p = (char *) sbrk(0);     /* get current end of address space */

if (brk(p + 32) < 0) {
    /* handle error */
}
/* else, change worked */
Practically speaking, you would not use brk() directly. Instead, you would use sbrk() exclusively to grow (or even shrink) the address space. (We show how to do this shortly, in Section 3.2.5, “Address Space Examination,” page 78.) Even more practically, you should never use these routines. A program using them can’t then use malloc() also, and this is a big problem, since many parts of the standard library rely on being able to use malloc(). Using brk() or sbrk() is thus likely to lead to hard-to-find program crashes. But it’s worth knowing about the low-level mechanics, and indeed, the malloc() suite of routines is implemented with sbrk() and brk().
3.2.4 Lazy Programmer Calls: alloca()

    "Danger, Will Robinson! Danger!"
        —The Robot

There is one additional memory allocation function that you should know about. We discuss it only so that you'll understand it when you see it, but you should not use it in new programs! This function is named alloca(); it's declared as follows:

/* Header on GNU/Linux, possibly not all Unix systems */
#include <alloca.h>                                    Common

void *alloca(size_t size);
The alloca() function allocates size bytes from the stack. What's nice about this is that the allocated storage disappears when the function returns. There's no need to explicitly free it because it goes away automatically, just as local variables do.

At first glance, alloca() seems like a programming panacea; memory can be allocated that doesn't have to be managed at all. Like the Dark Side of the Force, this is indeed seductive. And it is similarly to be avoided, for the following reasons:

• The function is nonstandard; it is not included in any formal standard, either ISO C or POSIX.

• The function is not portable. Although it exists on many Unix systems and GNU/Linux, it doesn't exist on non-Unix systems. This is a problem, since it's often important for code to be multiplatform, above and beyond just Linux and Unix.

• On some systems, alloca() can't even be implemented. All the world is not an Intel x86 processor, nor is all the world GCC.

• Quoting the manpage (emphasis added): "The alloca function is machine and compiler dependent. On many systems its implementation is buggy. Its use is discouraged."

• Quoting the manpage again: "On many systems alloca cannot be used inside the list of arguments of a function call, because the stack space reserved by alloca would appear on the stack in the middle of the space for the function arguments."

• It encourages sloppy coding. Careful and correct memory management isn't hard; you just have to think about what you're doing and plan ahead.

GCC generally uses a built-in version of the function that operates by using inline code. As a result, alloca() has other consequences. Quoting again from the manpage:

    The fact that the code is inlined means that it is impossible to take the address of this function, or to change its behavior by linking with a different library. The inlined code often consists of a single instruction adjusting the stack pointer, and does not check for stack overflow.
Thus, there is no NULL error return.
The manual page doesn’t go quite far enough in describing the problem with GCC’s built-in alloca(). If there’s a stack overflow, the return value is garbage. And you have no way to tell! This flaw makes GCC’s alloca() impossible to use in robust code. All of this should convince you to stay away from alloca() for any new code that you may write. If you’re going to have to write portable code using malloc() and free() anyway, there’s no reason to also write code using alloca().
3.2.5 Address Space Examination

The following program, ch03-memaddr.c, summarizes everything we've seen about the address space. It does many things that you should not do in practice, such as call alloca() or use brk() and sbrk() directly:

 1  /*
 2   * ch03-memaddr.c --- Show address of code, data and stack sections,
 3   *                    as well as BSS and dynamic memory.
 4   */
 5
 6  #include <stdio.h>
 7  #include <stddef.h>     /* for definition of ptrdiff_t on GLIBC */
 8  #include <unistd.h>
 9  #include <alloca.h>     /* for demonstration only */
10
11  extern void afunc(void);    /* a function for showing stack growth */
12
13  int bss_var;                /* auto init to 0, should be in BSS */
14  int data_var = 42;          /* init to nonzero, should be data */
15
16  int
17  main(int argc, char **argv)    /* arguments aren't used */
18  {
19      char *p, *b, *nb;
20
21      printf("Text Locations:\n");
22      printf("\tAddress of main: %p\n", main);
23      printf("\tAddress of afunc: %p\n", afunc);
24
25      printf("Stack Locations:\n");
26      afunc();
27
28      p = (char *) alloca(32);
29      if (p != NULL) {
30          printf("\tStart of alloca()'ed array: %p\n", p);
31          printf("\tEnd of alloca()'ed array: %p\n", p + 31);
32      }
33
34      printf("Data Locations:\n");
35      printf("\tAddress of data_var: %p\n", & data_var);
36
37      printf("BSS Locations:\n");
38      printf("\tAddress of bss_var: %p\n", & bss_var);
39
40      b = sbrk((ptrdiff_t) 32);     /* grow address space */
41      nb = sbrk((ptrdiff_t) 0);
42      printf("Heap Locations:\n");
43      printf("\tInitial end of heap: %p\n", b);
44      printf("\tNew end of heap: %p\n", nb);
45
46      b = sbrk((ptrdiff_t) -16);    /* shrink it */
47      nb = sbrk((ptrdiff_t) 0);
48      printf("\tFinal end of heap: %p\n", nb);
49  }
50
51  void afunc(void)
52  {
53      static int level = 0;         /* recursion level */
54      auto int stack_var;           /* automatic variable, on stack */
55
56      if (++level == 3)             /* avoid infinite recursion */
57          return;
58
59      printf("\tStack level %d: address of stack_var: %p\n",
60             level, & stack_var);
61
62      afunc();                      /* recursive call */
63  }
This program prints the locations of the two functions main() and afunc() (lines 22–23). It then shows how the stack grows downward, letting afunc() (lines 51–63) print the address of successive instantiations of its local variable stack_var. (stack_var is purposely declared auto, to emphasize that it's on the stack.) It then shows the location of memory allocated by alloca() (lines 28–32). Finally it prints the locations of data and BSS variables (lines 34–38), and then of memory allocated directly through sbrk() (lines 40–48). Here are the results when the program is run on an Intel GNU/Linux system:

$ ch03-memaddr
Text Locations:
        Address of main: 0x804838c
        Address of afunc: 0x80484a8
Stack Locations:
        Stack level 1: address of stack_var: 0xbffff864
        Stack level 2: address of stack_var: 0xbffff844    Stack grows downward
        Start of alloca()'ed array: 0xbffff860             Addresses are on the stack
        End of alloca()'ed array: 0xbffff87f
Data Locations:
        Address of data_var: 0x80496b8
BSS Locations:
        Address of bss_var: 0x80497c4                      BSS is above data variables
Heap Locations:
        Initial end of heap: 0x80497c8                     Heap is immediately above BSS
        New end of heap: 0x80497e8                         And grows upward
        Final end of heap: 0x80497d8                       Address spaces can shrink
3.3 Summary

• Every Linux (and Unix) program has different memory areas. They are stored in separate parts of the executable program's disk file. Some of the sections are loaded into the same part of memory when the program is run. All running copies of the same program share the executable code (the text segment). The size program shows the sizes of the different areas for relocatable object files and fully linked executable files.

• The address space of a running program may have holes in it, and the size of the address space can change as memory is allocated and released. On modern systems, address 0 is not part of the address space, so don't attempt to dereference NULL pointers.

• At the C level, memory is allocated or reallocated with one of malloc(), calloc(), or realloc(). Memory is freed with free(). (Although realloc() can do everything, using it that way isn't recommended.) It is unusual for freed memory to be removed from the address space; instead, it is reused for later allocations.

• Extreme care must be taken to
  • Free only memory received from the allocation routines,
  • Free such memory once and only once,
  • Free unused memory, and
  • Not "leak" any dynamically allocated memory.
• POSIX provides the strdup() function as a convenience, and GLIBC provides getline() and getdelim() for reading arbitrary-length lines.
• The low-level system call interface functions, brk() and sbrk(), provide direct but primitive access to memory allocation and deallocation. Unless you are writing your own storage allocator, you should not use them.

• The alloca() function for allocating memory on the stack exists, but is not recommended. Like being able to recognize poison ivy, you should know it only so that you'll know to avoid it.
3.4 Exercises

1. Starting with the structure—

struct line {
    size_t buflen;
    char *buf;
    FILE *fp;
};

—write your own readline() function that will read a line of any length. Don't worry about backslash continuation lines. Instead of using fgets() to read lines, use getc() to read characters one at a time.

2. Does your function preserve the terminating newline? Explain why or why not.

3. How does your function handle lines that end in CR-LF?

4. How do you initialize the structure? With a separate routine? With a documented requirement for specific values in the structure?

5. How do you indicate end-of-file? How do you indicate that an I/O error has occurred? For errors, should your function print an error message? Explain why or why not.

6. Write a program that uses your function to test it, and another program to generate input data to the first program. Test your function.

7. Rewrite your function to use fgets() and test it. Is the new code more complex or less complex? How does its performance compare to the getc() version?

8. Study the V7 end(3) manpage (/usr/man/man3/end.3 in the V7 distribution). Does it shed any light on how 'sbrk(0)' might work?

9. Enhance ch03-memaddr.c to print out the location of the arguments and the environment. In which part of the address space do they reside?
4 Files and File I/O
In this chapter

4.1 Introducing the Linux/Unix I/O Model ............. page 84
4.2 Presenting a Basic Program Structure ............. page 84
4.3 Determining What Went Wrong ...................... page 86
4.4 Doing Input and Output ........................... page 91
4.5 Random Access: Moving Around within a File ....... page 102
4.6 Creating Files ................................... page 106
4.7 Forcing Data to Disk ............................. page 113
4.8 Setting File Length .............................. page 114
4.9 Summary .......................................... page 115
Exercises ............................................ page 115
This chapter describes basic file operations: opening and creating files, reading and writing them, moving around in them, and closing them. Along the way it presents the standard mechanisms for detecting and reporting errors. The chapter ends by describing how to set a file's length and force file data and metadata to disk.
4.1 Introducing the Linux/Unix I/O Model

The Linux/Unix API model for I/O is straightforward. It can be summed up in four words: open, read, write, close. In fact, those are the names of the system calls: open(), read(), write(), close(). Here are their declarations:

#include <sys/types.h>                                 POSIX
#include <sys/stat.h>     /* for mode_t */
#include <fcntl.h>        /* for flags for open() */
#include <unistd.h>       /* for ssize_t */

int open(const char *pathname, int flags, mode_t mode);
ssize_t read(int fd, void *buf, size_t count);
ssize_t write(int fd, const void *buf, size_t count);
int close(int fd);
In the next and subsequent sections, we illustrate the model by writing a very simple version of cat. It’s so simple that it doesn’t even have options; all it does is concatenate the contents of the named files to standard output. It does do minimal error reporting. Once it’s written, we compare it to the V7 cat. We present the program top-down, starting with the command line. In succeeding sections, we present error reporting and then get down to brass tacks, showing how to do actual file I/O.
4.2 Presenting a Basic Program Structure

Our version of cat follows a structure that is generally useful. The first part starts with an explanatory comment, header includes, declarations, and the main() function:

 1  /*
 2   * ch04-cat.c --- Demonstrate open(), read(), write(), close(),
 3   *                errno and strerror().
 4   */
 5
 6  #include <stdio.h>        /* for fprintf(), stderr, BUFSIZ */
 7  #include <errno.h>        /* declare errno */
 8  #include <fcntl.h>        /* for flags for open() */
 9  #include <string.h>       /* declare strerror() */
10  #include <unistd.h>       /* for ssize_t */
11  #include <sys/types.h>
12  #include <sys/stat.h>     /* for mode_t */
13
14  char *myname;
15  int process(char *file);
16
17  /* main --- loop over file arguments */
18
19  int
20  main(int argc, char **argv)
21  {
22      int i;
23      int errs = 0;
24
25      myname = argv[0];
26
27      if (argc == 1)
28          errs = process("-");
29      else
30          for (i = 1; i < argc; i++)
31              errs += process(argv[i]);
32
33      return (errs != 0);
34  }
… continued later in the chapter …
The myname variable (line 14) is used later for error messages; main() sets it to the program name (argv[0]) as its first action (line 25). Then main() loops over the arguments. For each argument, it calls a function named process() to do the work. When given the filename - (a single dash, or minus sign), Unix cat reads standard input instead of trying to open a file named -. In addition, with no arguments, cat reads standard input. ch04-cat implements both of these behaviors. The check for ‘argc == 1’ (line 27) is true when there are no filename arguments; in this case, main() passes "-" to process(). Otherwise, main() loops over all the arguments, treating them as files to be processed. If one of them happens to be "-", the program then processes standard input. If process() returns a nonzero value, it means that something went wrong. Errors are added up in the errs variable (lines 28 and 31). When main() ends, it returns 0 if there were no errors, and 1 if there were (line 33). This is a fairly standard convention, whose meaning is discussed in more detail in Section 9.1.5.1, “Defining Process Exit Status,” page 300.
The structure presented in main() is quite generic: process() could do anything we want to the file. For example (ignoring the special use of "-"), process() could just as easily remove files as concatenate them! Before looking at the process() function, we have to describe how system call errors are represented and then how I/O is done. The process() function itself is presented in Section 4.4.3, “Reading and Writing,” page 96.
4.3 Determining What Went Wrong

    "If anything can go wrong, it will."
        —Murphy's Law

    "Be prepared."
        —The Boy Scouts
Errors can occur anytime. Disks can fill up, users can enter invalid data, the server on a network from which a file is being read can crash, the network can die, and so on. It is important to always check every operation for success or failure.

The basic Linux system calls almost universally return -1 on error, and 0 or a positive value on success. This lets you know that the operation has succeeded or failed:

int result;

result = some_system_call(param1, param2);
if (result < 0) {
    /* error occurred, do something */
}
else
    /* all ok, proceed */

Knowing that an error occurred isn't enough. It's necessary to know what error occurred. For that, each process has a predefined variable named errno. Whenever a system call fails, errno is set to one of a set of predefined error values. errno and the predefined values are declared in the <errno.h> header file:

#include <errno.h>                                     ISO C

extern int errno;
errno itself may be a macro that acts like an int variable; it need not be a real integer.
In particular, in threaded environments, each thread will have its own private version of errno. Practically speaking, though, for all the system calls and functions in this book, you can treat errno like a simple int.
4.3.1 Values for errno

The 2001 POSIX standard defines a large number of possible values for errno. Many of these are related to networking, IPC, or other specialized tasks. The manpage for each system call describes the possible errno values that can occur; thus, you can write code to check for particular errors and handle them specially if need be. The possible values are defined by symbolic constants. Table 4.1 lists the constants provided by GLIBC.

TABLE 4.1 GLIBC values for errno
Name             Meaning

E2BIG            Argument list too long.
EACCES           Permission denied.
EADDRINUSE       Address in use.
EADDRNOTAVAIL    Address not available.
EAFNOSUPPORT     Address family not supported.
EAGAIN           Resource unavailable, try again (may be the same value as EWOULDBLOCK).
EALREADY         Connection already in progress.
EBADF            Bad file descriptor.
EBADMSG          Bad message.
EBUSY            Device or resource busy.
ECANCELED        Operation canceled.
ECHILD           No child processes.
ECONNABORTED     Connection aborted.
ECONNREFUSED     Connection refused.
ECONNRESET       Connection reset.
EDEADLK          Resource deadlock would occur.
EDESTADDRREQ     Destination address required.
EDOM             Mathematics argument out of domain of function.
EDQUOT           Reserved.
EEXIST           File exists.
EFAULT           Bad address.
EFBIG            File too large.
EHOSTUNREACH     Host is unreachable.
EIDRM            Identifier removed.
EILSEQ           Illegal byte sequence.
EINPROGRESS      Operation in progress.
EINTR            Interrupted function.
EINVAL           Invalid argument.
EIO              I/O error.
EISCONN          Socket is connected.
EISDIR           Is a directory.
ELOOP            Too many levels of symbolic links.
EMFILE           Too many open files.
EMLINK           Too many links.
EMSGSIZE         Message too large.
EMULTIHOP        Reserved.
ENAMETOOLONG     Filename too long.
ENETDOWN         Network is down.
ENETRESET        Connection aborted by network.
ENETUNREACH      Network unreachable.
ENFILE           Too many files open in system.
ENOBUFS          No buffer space available.
ENODEV           No such device.
ENOENT           No such file or directory.
ENOEXEC          Executable file format error.
ENOLCK           No locks available.
ENOLINK          Reserved.
ENOMEM           Not enough space.
ENOMSG           No message of the desired type.
ENOPROTOOPT      Protocol not available.
ENOSPC           No space left on device.
ENOSYS           Function not supported.
ENOTCONN         The socket is not connected.
ENOTDIR          Not a directory.
ENOTEMPTY        Directory not empty.
ENOTSOCK         Not a socket.
ENOTSUP          Not supported.
ENOTTY           Inappropriate I/O control operation.
ENXIO            No such device or address.
EOPNOTSUPP       Operation not supported on socket.
EOVERFLOW        Value too large to be stored in data type.
EPERM            Operation not permitted.
EPIPE            Broken pipe.
EPROTO           Protocol error.
EPROTONOSUPPORT  Protocol not supported.
EPROTOTYPE       Protocol wrong type for socket.
ERANGE           Result too large.
EROFS            Read-only file system.
ESPIPE           Invalid seek.
ESRCH            No such process.
ESTALE           Reserved.
ETIMEDOUT        Connection timed out.
ETXTBSY          Text file busy.
EWOULDBLOCK      Operation would block (may be the same value as EAGAIN).
EXDEV            Cross-device link.
Many systems provide other error values as well, and older systems may not have all the errors just listed. You should check your local intro(2) and errno(2) manpages for the full story.

NOTE: errno should be examined only after an error has occurred and before further system calls are made. Its initial value is 0. However, nothing changes errno between errors, meaning that a successful system call does not reset it to 0. You can, of course, manually set it to 0 initially or whenever you like, but this is rarely done.
Initially, we use errno only for error reporting. There are two useful functions for error reporting. The first is perror():

    #include <stdio.h>                                           ISO C

    void perror(const char *s);

The perror() function prints a program-supplied string, followed by a colon, and then a string describing the value of errno:

    if (some_system_call(param1, param2) < 0) {
        perror("system call failed");
        return 1;
    }
We prefer the strerror() function, which takes an error value parameter and returns a pointer to a string describing the error:

    #include <string.h>                                          ISO C

    char *strerror(int errnum);

strerror() provides maximum flexibility in error reporting, since fprintf() makes it possible to print the error in any way we like:

    if (some_system_call(param1, param2) < 0) {
        fprintf(stderr, "%s: %d, %d: some_system_call failed: %s\n",
            argv[0], param1, param2, strerror(errno));
        return 1;
    }
You will see many examples of both functions throughout the book.
4.3.2 Error Message Style

C provides several special macros for use in error reporting. The most widely used are __FILE__ and __LINE__, which expand to the name of the source file and the current line number in that file. These have been available in C since its beginning. C99 defines an additional predefined identifier, __func__, which represents the name of the current function as a character string. The macros are used like this:

    if (some_system_call(param1, param2) < 0) {
        fprintf(stderr, "%s: %s (%s %d): some_system_call(%d, %d) failed: %s\n",
            argv[0], __func__, __FILE__, __LINE__,
            param1, param2, strerror(errno));
        return 1;
    }
Here, the error message includes not only the program's name but also the function name, source file name, and line number. The full list of identifiers useful for diagnostics is provided in Table 4.2.

TABLE 4.2 C99 diagnostic identifiers

Identifier    C version    Meaning
__DATE__      C89          Date of compilation in the form "Mmm nn yyyy".
__FILE__      Original     Source-file name in the form "program.c".
__LINE__      Original     Source-file line number in the form 42.
__TIME__      C89          Time of compilation in the form "hh:mm:ss".
__func__      C99          Name of current function, as if declared const char __func__[] = "name".
The use of __FILE__ and __LINE__ was quite popular in the early days of Unix, when most people had source code and could find the error and fix it. As Unix systems became more commercial, use of these identifiers gradually diminished, since knowing the source code location isn’t of much help to someone who only has a binary executable. Today, although GNU/Linux systems come with source code, said source code often isn’t installed by default. Thus, using these identifiers for error messages doesn’t seem to provide much additional value. The GNU Coding Standards don’t even mention them.
4.4 Doing Input and Output

All I/O in Linux is accomplished through file descriptors. This section introduces file descriptors, describes how to obtain and release them, and explains how to do I/O with them.
4.4.1 Understanding File Descriptors

A file descriptor is an integer value. Valid file descriptors start at 0 and go up to some system-defined limit. These integers are in fact simple indexes into each process's table of open files. (This table is maintained inside the operating system; it is not accessible to a running program.) On most modern systems, the size of the table is large. The command 'ulimit -n' prints the value:

    $ ulimit -n
    1024

From C, the maximum number of open files is returned by the getdtablesize() (get descriptor table size) function:

    #include <unistd.h>                                          Common

    int getdtablesize(void);

This small program prints the result of the function:

    /* ch04-maxfds.c --- Demonstrate getdtablesize(). */

    #include <stdio.h>      /* for printf() */
    #include <unistd.h>     /* for getdtablesize() */
    #include <stdlib.h>     /* for exit() */

    int main(int argc, char **argv)
    {
        printf("max fds: %d\n", getdtablesize());
        exit(0);
    }

When compiled and run, not surprisingly the program prints the same value as printed by ulimit:

    $ ch04-maxfds
    max fds: 1024
File descriptors are held in normal int variables; it is typical to see declarations of the form 'int fd' for use with I/O system calls. There is no predefined type for file descriptors.

In the usual case, every program starts running with three file descriptors already opened for it. These are standard input, standard output, and standard error, on file descriptors 0, 1, and 2, respectively. (If not otherwise redirected, each one is connected to your keyboard and screen.)
Obvious Manifest Constants. An Oxymoron?

When working with file-descriptor-based system calls and the standard input, output, and error, it is common practice to use the integer constants 0, 1, and 2 directly in code. In the overwhelming majority of cases, such manifest constants are a bad idea. You never know what the meaning is of some random integer constant and whether the same constant used elsewhere is related to it or not. To this end, the POSIX standard requires the definition of the following symbolic constants in <unistd.h>:

STDIN_FILENO     The "file number" for standard input: 0.
STDOUT_FILENO    The file number for standard output: 1.
STDERR_FILENO    The file number for standard error: 2.

However, in our humble opinion, using these macros is overkill. First, it's painful to type 12 or 13 characters instead of just 1. Second, the use of 0, 1, and 2 is so standard and so well known that there's really no grounds for confusion as to the meaning of these particular manifest constants. On the other hand, use of these constants leaves no doubt as to what was intended. Consider this statement:

    int fd = 0;

Is fd being initialized to refer to standard input, or is the programmer being careful to initialize his variables to a reasonable value? You can't tell. One approach (as recommended by Geoff Collyer) is to use the following enum definition:

    enum { Stdin, Stdout, Stderr };

These constants can then be used in place of 0, 1, and 2. They are both readable and easier to type.
4.4.2 Opening and Closing Files

New file descriptors are obtained (among other sources) from the open() system call. This system call opens a file for reading or writing and returns a new file descriptor for subsequent operations on the file. We saw the declaration earlier:

    #include <sys/types.h>                                       POSIX
    #include <sys/stat.h>
    #include <fcntl.h>

    int open(const char *pathname, int flags, mode_t mode);

The three arguments are as follows:

const char *pathname
    A C string, representing the name of the file to open.

int flags
    The bitwise-OR of one or more of the constants defined in <fcntl.h>. We describe them shortly.

mode_t mode
    The permissions mode of a file being created. This is discussed later in the chapter; see Section 4.6, "Creating Files," page 106. When opening an existing file, omit this parameter.[1]

The return value from open() is either the new file descriptor or -1 to indicate an error, in which case errno will be set. For simple I/O, the flags argument should be one of the values in Table 4.3.

TABLE 4.3 Flag values for open()

Symbolic constant    Value    Meaning
O_RDONLY             0        Open file only for reading; writes will fail.
O_WRONLY             1        Open file only for writing; reads will fail.
O_RDWR               2        Open file for reading and writing.

We will see example code shortly. Additional values for flags are described in Section 4.6, "Creating Files," page 106. Much early Unix code didn't use the symbolic values. Instead, the numeric value was used. Today this is considered bad practice, but we present the values so that you'll recognize their meanings if you see them.

The close() system call closes a file: The entry for it in the system's file descriptor table is marked as unused, and no further operations may be done with that file descriptor. The declaration is

    #include <unistd.h>                                          POSIX

    int close(int fd);
[1] open() is one of the few variadic system calls.
The return value is 0 on success, -1 on error. There isn't much you can do if an error does occur, other than report it. Errors closing files are unusual, but not unheard of, particularly for files being accessed over a network. Thus, it's good practice to check the return value, particularly for files opened for writing. If you choose to ignore the return value, specifically cast it to void, to signify that you don't care about the result:

    (void) close(fd);          /* throw away return value */

The flip side of this advice is that too many casts to void tend to clutter the code. For example, despite the "always check the return value" principle, it's exceedingly rare to see code that checks the return value of printf() or bothers to cast it to void. As with many aspects of C programming, experience and judgment should be applied here too.

As mentioned, the number of open files, while large, is limited, and you should always close files when you're done with them. If you don't, you will eventually run out of file descriptors, a situation that leads to a lack of robustness on the part of your program. The system closes all open files when a process exits, but—except for 0, 1, and 2—it's bad form to rely on this.

When open() returns a new file descriptor, it always returns the lowest unused integer value. Always. Thus, if file descriptors 0–6 are open and the program closes file descriptor 5, then the next call to open() returns 5, not 7. This behavior is important; we see later in the book how it's used to cleanly implement many important Unix features, such as I/O redirection and piping.

4.4.2.1 Mapping FILE * Variables to File Descriptors

The Standard I/O library functions and FILE * variables from <stdio.h>, such as stdin, stdout, and stderr, are built on top of the file-descriptor-based system calls. Occasionally, it's useful to directly access the file descriptor associated with a file pointer if you need to do something not defined by the ISO C standard. The fileno() function returns the underlying file descriptor:

    #include <stdio.h>                                           POSIX

    int fileno(FILE *stream);
We will see an example later, in Section 4.4.4, “Example: Unix cat,” page 99.
4.4.2.2 Closing All Open Files

Open files are inherited by child processes from their parent processes. They are, in effect, shared. In particular, the position in the file is shared. We leave the details for discussion later, in Section 9.1.1.2, "File Descriptor Sharing," page 286. Since programs can inherit open files, you may occasionally see programs that close all their files in order to start out with a "clean slate." In particular, code like this is typical:

    int i;

    /* leave 0, 1, and 2 alone */
    for (i = 3; i < getdtablesize(); i++)
        (void) close(i);

Assume that the result of getdtablesize() is 1024. This code works, but it makes (1024 – 3) * 2 = 2042 system calls. 1020 of them are needless, since the return value from getdtablesize() doesn't change. Here is a better way to write this code:

    int i, fds;

    for (i = 3, fds = getdtablesize(); i < fds; i++)
        (void) close(i);

Such an optimization does not affect the readability of the code, and it can make a difference, particularly on slow systems. In general, it's worth looking for cases in which loops compute the same result repeatedly, to see if such a computation can't be pulled out of the loop. In all such cases, though, be sure that you (a) preserve the code's correctness and (b) preserve its readability!
4.4.3 Reading and Writing

I/O is accomplished with the read() and write() system calls, respectively:

    #include <sys/types.h>                                       POSIX
    #include <unistd.h>

    ssize_t read(int fd, void *buf, size_t count);
    ssize_t write(int fd, const void *buf, size_t count);
Each function is about as simple as can be. The arguments are the file descriptor for the open file, a pointer to a buffer to read data into or to write data from, and the number of bytes to read or write. The return value is the number of bytes actually read or written. (This number can be smaller than the requested amount: For a read operation this happens when fewer than count bytes are left in the file, and for a write operation it happens if a disk fills up or some other error occurs.) The return value is -1 if an error occurred, in which case errno indicates the error. When read() returns 0, it means that end-of-file has been reached.

We can now show the rest of the code for ch04-cat. The process() routine uses 0 if the input filename is "-", for standard input (lines 50 and 51). Otherwise, it opens the given file:

    36  /*
    37   * process --- do something with the file, in this case,
    38   * send it to stdout (fd 1).
    39   * Returns 0 if all OK, 1 otherwise.
    40   */
    41
    42  int process(char *file)
    43  {
    44      int fd;
    45      ssize_t rcount, wcount;
    46
    47      char buffer[BUFSIZ];
    48      int errors = 0;
    49
    50      if (strcmp(file, "-") == 0)
    51          fd = 0;
    52      else if ((fd = open(file, O_RDONLY)) < 0) {
    53          fprintf(stderr, "%s: %s: cannot open for reading: %s\n",
    54              myname, file, strerror(errno));
    55          return 1;
    56      }
The buffer buffer (line 47) is of size BUFSIZ; this constant is defined by <stdio.h> to be the "optimal" block size for I/O. Although the value for BUFSIZ varies across systems, code that uses this constant is clean and portable. The core of the routine is the following loop, which repeatedly reads data until either end-of-file or an error is encountered:
    58      while ((rcount = read(fd, buffer, sizeof buffer)) > 0) {
    59          wcount = write(1, buffer, rcount);
    60          if (wcount != rcount) {
    61              fprintf(stderr, "%s: %s: write error: %s\n",
    62                  myname, file, strerror(errno));
    63              errors++;
    64              break;
    65          }
    66      }
The rcount and wcount variables (line 45) are of type ssize_t, "signed size_t," which allows them to hold negative values. Note that the count value passed to write() is the return value from read() (line 59). While we want to read fixed-size BUFSIZ chunks, it is unlikely that the file itself is a multiple of BUFSIZ bytes big. When the final, smaller, chunk of bytes is read from the file, the return value indicates how many bytes of buffer received new data. Only those bytes should be copied to standard output, not the entire buffer.

The test 'wcount != rcount' on line 60 is the correct way to check for write errors; if some, but not all, of the data were written, then wcount will be positive but smaller than rcount.

Finally, process() checks for read errors (lines 68–72) and then attempts to close the file. In the (unlikely) event that close() fails (line 75), it prints an error message. Avoiding the close of standard input isn't strictly necessary in this program, but it's a good habit to develop for writing larger programs, in case other code elsewhere wants to do something with it or if a child program will inherit it. The last statement (line 82) returns 1 if there were errors, 0 otherwise.
    68      if (rcount < 0) {
    69          fprintf(stderr, "%s: %s: read error: %s\n",
    70              myname, file, strerror(errno));
    71          errors++;
    72      }
    73
    74      if (fd != 0) {
    75          if (close(fd) < 0) {
    76              fprintf(stderr, "%s: %s: close error: %s\n",
    77                  myname, file, strerror(errno));
    78              errors++;
    79          }
    80      }
    81
    82      return (errors != 0);
    83  }
ch04-cat checks every system call for errors. While this is tedious, it provides robustness (or at least clarity): When something goes wrong, ch04-cat prints an error message that is as specific as possible. The combination of errno and strerror() makes this easy to do. That’s it for ch04-cat, only 88 lines of code!
To sum up, there are several points to understand about Unix I/O:

• I/O is uninterpreted. The I/O system calls merely move bytes around. They do no interpretation of the data; all interpretation is up to the user-level program. This makes reading and writing binary structures just as easy as reading and writing lines of text (easier, really, although using binary data introduces portability problems).

• I/O is flexible. You can read or write as many bytes at a time as you like. You can even read and write data one byte at a time, although doing so for large amounts of data is more expensive than doing so in large chunks.

• I/O is simple. The three-valued return (negative for error, zero for end-of-file, positive for a count) makes programming straightforward and obvious.

• I/O can be partial. Both read() and write() can transfer fewer bytes than requested. Application code (that is, your code) must always be aware of this.
4.4.4 Example: Unix cat

As promised, here is the V7 version of cat.[2] It begins by checking for options. The V7 cat accepts a single option, -u, for doing unbuffered output. The basic design is similar to the one shown above; it loops over the files named by the command-line arguments and reads each file, one character at a time, sending the characters to standard output. Unlike our version, it uses the <stdio.h> facilities. In many ways code using the Standard I/O library is easier to read and write, since all buffering issues are hidden by the library.

[2] See /usr/src/cmd/cat.c in the V7 distribution. The program compiles without change under GNU/Linux.
    1   /*
    2    * Concatenate files.
    3    */
    4
    5   #include <stdio.h>
    6   #include <sys/types.h>
    7   #include <sys/stat.h>
    8
    9   char    stdbuf[BUFSIZ];
    10
    11  main(argc, argv)
    12  char **argv;
    13  {
    14      int fflg = 0;
    15      register FILE *fi;
    16      register c;
    17      int dev, ino = -1;
    18      struct stat statb;
    19
    20      setbuf(stdout, stdbuf);
    21      for( ; argc>1 && argv[1][0]=='-'; argc--,argv++) {
    22          switch(argv[1][1]) {                    Process options
    23          case 0:
    24              break;
    25          case 'u':
    26              setbuf(stdout, (char *)NULL);
    27              continue;
    28          }
    29          break;
    30      }
    31      fstat(fileno(stdout), &statb);              Lines 31–36 explained in Chapter 5
    32      statb.st_mode &= S_IFMT;
    33      if (statb.st_mode!=S_IFCHR && statb.st_mode!=S_IFBLK) {
    34          dev = statb.st_dev;
    35          ino = statb.st_ino;
    36      }
    37      if (argc < 2) {
    38          argc = 2;
    39          fflg++;
    40      }
    41      while (--argc > 0) {                        Loop over files
    42          if (fflg || (*++argv)[0]=='-' && (*argv)[1]=='\0')
    43              fi = stdin;
    44          else {
    45              if ((fi = fopen(*argv, "r")) == NULL) {
    46                  fprintf(stderr, "cat: can't open %s\n", *argv);
    47                  continue;
    48              }
    49          }
    50          fstat(fileno(fi), &statb);              Lines 50–56 explained in Chapter 5
    51          if (statb.st_dev==dev && statb.st_ino==ino) {
    52              fprintf(stderr, "cat: input %s is output\n",
    53                  fflg?"-": *argv);
    54              fclose(fi);
    55              continue;
    56          }
    57          while ((c = getc(fi)) != EOF)           Copy file contents to stdout
    58              putchar(c);
    59          if (fi!=stdin)
    60              fclose(fi);
    61      }
    62      return(0);
    63  }
Of note is that the program always exits successfully (line 62); it could have been written to note errors and indicate them in main()'s return value. (The mechanics of process exiting and the meaning of different exit status values are discussed in Section 9.1.5.1, "Defining Process Exit Status," page 300.)

The code dealing with the struct stat and the fstat() function (lines 31–36 and 50–56) is undoubtedly opaque, since we haven't yet covered these functions, and won't until the next chapter. (But do note the use of fileno() on line 50 to get at the underlying file descriptor associated with the FILE * variables.) The idea behind the code is to make sure that no input file is the same as the output file. This is intended to prevent infinite file growth, in case of a command like this:

    $ cat myfile >> myfile                 Append one copy of myfile onto itself?

And indeed, the check works:

    $ echo hi > myfile                     Create a file
    $ v7cat myfile >> myfile               Attempt to append it onto itself
    cat: input myfile is output

If you try this with ch04-cat, it will keep running, and myfile will keep growing until you interrupt it. The GNU version of cat does perform the check. Note that something like the following is beyond cat's control:

    $ v7cat < myfile > myfile
    cat: input - is output
    $ ls -l myfile
    -rw-r--r--   1 arnold   devel         0 Mar 24 14:17 myfile

In this case, it's too late because the shell truncated myfile (with the > operator) before cat ever gets a chance to examine the file! In Section 5.4.4.2, "The V7 cat Revisited," page 150, we explain the struct stat code.
4.5 Random Access: Moving Around within a File

So far, we have discussed sequential I/O, whereby data are read or written beginning at the front of the file and continuing until the end. Often, this is all a program needs to do. However, it is possible to do random access I/O; that is, read data from an arbitrary position in the file, without having to read everything before that position first. The offset of a file descriptor is the position within an open file at which the next read or write will occur. A program sets the offset with the lseek() system call:

    #include <sys/types.h>    /* for off_t */                    POSIX
    #include <unistd.h>       /* declares lseek() and whence values */

    off_t lseek(int fd, off_t offset, int whence);
The type off_t (offset type) is a signed integer type representing byte positions (offsets from the beginning) within a file. On 32-bit systems, the type is usually a long. However, many modern systems allow very large files, in which case off_t may be a more unusual type, such as a C99 int64_t or some other extended type. lseek() takes three arguments, as follows:

int fd
    The file descriptor for the open file.

off_t offset
    A position to which to move. The interpretation of this value depends on the whence parameter. offset can be positive or negative: Negative values move toward the front of the file; positive values move toward the end of the file.

int whence
    Describes the location in the file to which offset is relative. See Table 4.4.

TABLE 4.4 whence values for lseek()

Symbolic constant    Value    Meaning
SEEK_SET             0        offset is absolute, that is, relative to the beginning of the file.
SEEK_CUR             1        offset is relative to the current position in the file.
SEEK_END             2        offset is relative to the end of the file.
Much old code uses the numeric values shown in Table 4.4. However, any new code you write should use the symbolic values, whose meanings are clearer. The meaning of the values and their effects upon file position are shown in Figure 4.1. Assuming that the file has 3000 bytes (file start at offset 0, file end at offset 3000) and that the current offset is 2000 before each call to lseek(), the new position after each call is as shown:

    lseek(fd, (off_t) 40, SEEK_END);       New position: 3040
    lseek(fd, (off_t) -40, SEEK_END);      New position: 2960
    lseek(fd, (off_t) 40, SEEK_CUR);       New position: 2040
    lseek(fd, (off_t) -40, SEEK_CUR);      New position: 1960
    lseek(fd, (off_t) 40, SEEK_SET);       New position: 40

FIGURE 4.1 Offsets for lseek()
Negative offsets relative to the beginning of the file are meaningless; they fail with an "invalid argument" error. The return value is the new position in the file. Thus, to find out where in the file you are, use

    off_t curpos;
    ...
    curpos = lseek(fd, (off_t) 0, SEEK_CUR);
The l in lseek() stands for long. lseek() was introduced in V7 Unix when file sizes were extended; V6 had a simple seek() system call. As a result, much old documentation (and code) treats the offset parameter as if it had type long, and instead of a cast to off_t, it's not unusual to see an L suffix on constant offset values:

    curpos = lseek(fd, 0L, SEEK_CUR);
On systems with a Standard C compiler, where lseek() is declared with a prototype, such old code continues to work since the compiler automatically promotes the 0L from long to off_t if they are different types. One interesting and important aspect of lseek() is that it is possible to seek beyond the end of a file. Any data that are subsequently written at that point go into the file,
but with a "gap" or "hole" between the data at the previous end of the file and the new data. Data in the gap read as if they are all zeros. The following program demonstrates the creation of holes. It writes three instances of a struct at the beginning, middle, and far end of a file. The offsets chosen (lines 16–18, the third element of each structure) are arbitrary but big enough to demonstrate the point:

    1   /* ch04-holes.c --- Demonstrate lseek() and holes in files. */
    2
    3   #include <stdio.h>        /* for fprintf(), stderr */
    4   #include <errno.h>        /* declare errno */
    5   #include <fcntl.h>        /* for flags for open() */
    6   #include <string.h>       /* declare strerror() */
    7   #include <unistd.h>       /* for lseek(), write(), close() */
    8   #include <sys/types.h>    /* for off_t, etc. */
    9   #include <sys/stat.h>     /* for mode_t */
    10
    11  struct person {
    12      char name[10];        /* first name */
    13      char id[10];          /* ID number */
    14      off_t pos;            /* position in file, for demonstration */
    15  } people[] = {
    16      { "arnold", "123456789", 0 },
    17      { "miriam", "987654321", 10240 },
    18      { "joe", "192837465", 81920 },
    19  };
    20
    21  int
    22  main(int argc, char **argv)
    23  {
    24      int fd;
    25      int i, j;
    26
    27      if (argc < 2) {
    28          fprintf(stderr, "usage: %s file\n", argv[0]);
    29          return 1;
    30      }
    31
    32      fd = open(argv[1], O_RDWR|O_CREAT|O_TRUNC, 0666);
    33      if (fd < 0) {
    34          fprintf(stderr, "%s: %s: cannot open for read/write: %s\n",
    35              argv[0], argv[1], strerror(errno));
    36          return 1;
    37      }
    38
    39      j = sizeof(people) / sizeof(people[0]);    /* count of elements */
Lines 27–30 make sure that the program was invoked properly. Lines 32–37 open the named file and verify that the open succeeded.
The calculation on line 39 of j, the array element count, uses a lovely, portable trick: The number of elements is the size of the entire array divided by the size of the first element. The beauty of this idiom is that it's always right: No matter how many elements you add to or remove from such an array, the compiler will figure it out. It also doesn't require a terminating sentinel element; that is, one in which all the fields are set to zero, NULL, or some such.

The work is done by a loop (lines 41–55), which seeks to the byte offset given in each structure (line 42) and then writes the structure out (line 49):

    41      for (i = 0; i < j; i++) {
    42          if (lseek(fd, people[i].pos, SEEK_SET) < 0) {
    43              fprintf(stderr, "%s: %s: seek error: %s\n",
    44                  argv[0], argv[1], strerror(errno));
    45              (void) close(fd);
    46              return 1;
    47          }
    48
    49          if (write(fd, &people[i], sizeof(people[i])) != sizeof(people[i])) {
    50              fprintf(stderr, "%s: %s: write error: %s\n",
    51                  argv[0], argv[1], strerror(errno));
    52              (void) close(fd);
    53              return 1;
    54          }
    55      }
    56
    57      /* all ok here */
    58      (void) close(fd);
    59      return 0;
    60  }
Here are the results when the program is run:

    $ ch04-holes peoplelist                Run the program
    $ ls -ls peoplelist                    Show size and blocks used
    16 -rw-r--r--   1 arnold   devel    81944 Mar 23 17:43 peoplelist
    $ echo 81944 / 4096 | bc -l            Show blocks if no holes
    20.00585937500000000000

We happen to know that each disk block in the file uses 4096 bytes. (How we know that is discussed in Section 5.4.2, "Retrieving File Information," page 141. For now, take it as a given.) The final bc command indicates that a file of size 81,944 bytes needs 21 disk blocks. However, the -s option to ls, which tells us how many blocks a file
really uses, shows that the file uses only 16 blocks![3] The missing blocks in the file are the holes. This is illustrated in Figure 4.2.
FIGURE 4.2 Holes in a file. The records for arnold, miriam, and joe occupy blocks 1, 3, and 21 of the logical file; the blocks between them are holes.
NOTE: ch04-holes.c does direct binary I/O. This nicely illustrates the beauty of random access I/O: You can treat a disk file as if it were a very large array of binary data structures. In practice, storing live data by using binary I/O is a design decision that you should consider carefully. For example, suppose you need to move the data to a system using different byte orders for integers? Or different floating-point formats? Or to a system with different alignment requirements? Ignoring such issues can become significantly costly.
4.6 Creating Files

As described earlier, open() apparently opens existing files only. This section describes how brand-new files are created. There are two choices: creat() and open() with additional flags. Initially, creat() was the only way to create a file, but open() was later enhanced with this functionality as well. Both mechanisms require specification of the initial file permissions.
[3] At least three of these blocks contain the data that we wrote out; the others are for use by the operating system in keeping track of where the data reside.

4.6.1 Specifying Initial File Permissions

As a GNU/Linux user, you are familiar with file permissions as printed by 'ls -l': read, write, and execute for each of user (the file's owner), group, and other. The various
combinations are often expressed in octal, particularly for the chmod and umask commands. For example, file permissions -rw-r--r-- is equivalent to octal 0644 and -rwxr-xr-x is equivalent to octal 0755. (The leading 0 is C's notation for octal values.)

When you create a file, you must know the protections to be given to the new file. You can do this as a raw octal number if you choose, and indeed it's not uncommon to see such numbers in older code. However, it is better to use a bitwise OR of one or more of the symbolic constants from <sys/stat.h>, described in Table 4.5.

TABLE 4.5 POSIX symbolic constants for file modes

Symbolic constant    Value    Meaning
S_IRWXU              00700    User read, write, and execute permission.
S_IRUSR              00400    User read permission.
S_IREAD                       Same as S_IRUSR.
S_IWUSR              00200    User write permission.
S_IWRITE                      Same as S_IWUSR.
S_IXUSR              00100    User execute permission.
S_IEXEC                       Same as S_IXUSR.
S_IRWXG              00070    Group read, write, and execute permission.
S_IRGRP              00040    Group read permission.
S_IWGRP              00020    Group write permission.
S_IXGRP              00010    Group execute permission.
S_IRWXO              00007    Other read, write, and execute permission.
S_IROTH              00004    Other read permission.
S_IWOTH              00002    Other write permission.
S_IXOTH              00001    Other execute permission.
The following fragment shows how to create variables representing permissions -rw-r--r-- and -rwxr-xr-x (0644 and 0755 respectively):

    mode_t rw_mode, rwx_mode;

    rw_mode = S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH;               /* 0644 */
    rwx_mode = S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH;    /* 0755 */
Older code used S_IREAD, S_IWRITE, and S_IEXEC together with bit shifting to produce the same results:

    mode_t rw_mode, rwx_mode;

    rw_mode = (S_IREAD|S_IWRITE) | (S_IREAD >> 3) | (S_IREAD >> 6);     /* 0644 */
    rwx_mode = (S_IREAD|S_IWRITE|S_IEXEC)
                   | ((S_IREAD|S_IEXEC) >> 3) | ((S_IREAD|S_IEXEC) >> 6);   /* 0755 */
Unfortunately, neither notation is especially clear. The modern version is preferred since each permission bit has its own name and there is less opportunity to do the bitwise operations incorrectly.

The additional permission bits shown in Table 4.6 are available for use when you are changing a file's permission, but they should not be used when you initially create a file. Whether these bits may be included varies wildly by operating system. It's best not to try; rather, you should explicitly change the permissions after the file is created. (Changing permissions is described in Section 5.5.2, "Changing Permissions: chmod() and fchmod()," page 156. The meanings of these bits are discussed in Chapter 11, "Permissions and User and Group ID Numbers," page 403.)

TABLE 4.6 Additional POSIX symbolic constants for file modes
Symbolic constant    Value    Meaning
S_ISUID              04000    Set user ID.
S_ISGID              02000    Set group ID.
S_ISVTX              01000    Save text.
When standard utilities create files, the default permissions they use are -rw-rw-rw- (or 0666). Because most users prefer to avoid having files that are world-writable, each process carries with it a umask. The umask is a set of permission bits indicating those bits that should never be allowed when new files are created. (The umask is not used when changing permissions.) Conceptually, the operation that occurs is

    actual_permissions = (requested_permissions & (~umask));
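The conceptual operation is simple enough to express directly in C; a small sketch (the function name is ours, not a system interface):

```c
#include <sys/types.h>
#include <sys/stat.h>

/* Bits set in the mask are cleared from the requested permissions;
   e.g., requesting 0666 under the common umask of 022 yields 0644. */
mode_t apply_umask(mode_t requested, mode_t mask)
{
    return requested & ~mask;
}
```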
The umask is usually set by the umask command in $HOME/.profile when you log in. From a C program, it’s set with the umask() system call:
4.6 Creating Files
POSIX
#include <sys/types.h>
#include <sys/stat.h>

mode_t umask(mode_t mask);
The return value is the old umask. Thus, to determine the current mask, you must set it to a value and then reset it (or change it, as desired):

mode_t mask = umask(0);     /* retrieve current mask */
(void) umask(mask);         /* restore it */
Here is an example of the umask in action, at the shell level:

$ umask                                  Show the current mask
0022
$ touch newfile                          Create a file
$ ls -l newfile                          Show permissions of new file
-rw-r--r--    1 arnold   devel    0 Mar 24 15:43 newfile
$ umask 0                                Set mask to empty
$ touch newfile2                         Create a second file
$ ls -l newfile2                         Show permissions of new file
-rw-rw-rw-    1 arnold   devel    0 Mar 24 15:44 newfile2
4.6.2 Creating Files with creat()

The creat()4 system call creates new files. It is declared as follows:

POSIX

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int creat(const char *pathname, mode_t mode);
The mode argument represents the permissions for the new file (as discussed in the previous section). The file named by pathname is created, with the given permission as modified by the umask. It is opened for writing (only), and the return value is the file descriptor for the new file or -1 if there was a problem. In this case, errno indicates the error. If the file already exists, it will be truncated when opened. In all other respects, file descriptors returned by creat() are the same as those returned by open(); they’re used for writing and seeking and must be closed with close():
4 Yes, that’s how it’s spelled. Ken Thompson, one of the two “fathers” of Unix, was once asked what he would have done differently if he had it to do over again. He replied that he would have spelled creat() with an “e.”
Indeed, that is exactly what he did for the Plan 9 From Bell Labs operating system.
int fd, count;

/* Error checking omitted for brevity */
fd = creat("/some/new/file", 0666);
count = write(fd, "some data\n", 10);
(void) close(fd);
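With the error checking filled back in, the same operations might look like this (a sketch; the helper name and path are illustrative):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

/* Create path with mode 0666 (as modified by the umask), write msg
   to it, and close it.  Returns 0 on success, -1 on error, with a
   diagnostic printed via perror(). */
int create_and_write(const char *path, const char *msg)
{
    int fd = creat(path, 0666);

    if (fd < 0) {
        perror(path);
        return -1;
    }
    if (write(fd, msg, strlen(msg)) != (ssize_t) strlen(msg)) {
        perror("write");
        (void) close(fd);
        return -1;
    }
    return close(fd);
}
```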
4.6.3 Revisiting open()

You may recall the declaration for open():

int open(const char *pathname, int flags, mode_t mode);
Earlier, we said that when opening a file for plain I/O, we could ignore the mode argument. Having seen creat(), though, you can probably guess that open() can also be used for creating files and that the mode argument is used in this case. This is indeed true. Besides the O_RDONLY, O_WRONLY, and O_RDWR flags, additional flags may be bitwise OR'd when open() is called. The POSIX standard mandates a number of these additional flags. Table 4.7 presents the flags that are used for most mundane applications.

TABLE 4.7 Additional POSIX flags for open()
Flag        Meaning
O_APPEND    Force all writes to occur at the end of the file.
O_CREAT     Create the file if it doesn't exist.
O_EXCL      When used with O_CREAT, cause open() to fail if the file already exists.
O_TRUNC     Truncate the file (set it to zero length) if it exists.
Given O_APPEND and O_TRUNC, you can imagine how the shell might open or create files corresponding to the > and >> operators. For example:

int fd;
extern char *filename;
mode_t mode = S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP|S_IROTH|S_IWOTH;    /* 0666 */

fd = open(filename, O_CREAT|O_WRONLY|O_TRUNC, mode);      /* for >  */

fd = open(filename, O_CREAT|O_WRONLY|O_APPEND, mode);     /* for >> */
Note that the O_EXCL flag would not be used here, since for both > and >>, it's not an error for the file to exist. Remember also that the system applies the umask to the requested permissions. Also, it's easy to see that, at least conceptually, creat() could be written this easily:

int creat(const char *path, mode_t mode)
{
    return open(path, O_CREAT|O_WRONLY|O_TRUNC, mode);
}
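Where O_EXCL does earn its keep is when a program must know that it, and not some other process, created the file: a lock file, for instance. A hedged sketch (the function name and test path are ours):

```c
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

/* Attempt to create path exclusively.  Returns an open descriptor,
   or -1 with errno set to EEXIST if the file was already there. */
int create_exclusive(const char *path)
{
    return open(path, O_CREAT | O_EXCL | O_WRONLY, 0644);
}
```

A second call on the same name fails instead of silently reusing the existing file, which is exactly the property a lock file needs.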
NOTE If a file is opened with O_APPEND, all data will be written at the end of the file, even if the current position has been reset with lseek().
Modern systems provide additional flags whose uses are more specialized. Table 4.8 describes them briefly.

TABLE 4.8 Additional advanced POSIX flags for open()
Flag        Meaning
O_NOCTTY    If the device being opened is a terminal, it does not become the process's controlling terminal. (This is a more advanced topic, discussed briefly in Section 9.2.1, page 312.)
O_NONBLOCK  Disables blocking of I/O operations in certain cases (see Section 9.4.3.4, page 333).
O_DSYNC     Ensure that data written to a file make it all the way to physical storage before write() returns.
O_RSYNC     Ensure that any data that read() would read, which may have been written to the file being read, have made it all the way to physical storage before read() returns.
O_SYNC      Like O_DSYNC, but also ensure that all file metadata, such as access times, have also been written to physical storage.
The O_DSYNC, O_RSYNC, and O_SYNC flags need some explanation. Unix systems (including Linux) maintain an internal cache of disk blocks, called the buffer cache. When the write() system call returns, the data passed to the operating system have been copied to a buffer in the buffer cache. They are not necessarily written out to the disk.
The buffer cache provides considerable performance improvement: Since disk I/O is often an order of magnitude or more slower than CPU and memory operations, programs would slow down considerably if they had to wait for every write to go all the way through to the disk. In addition, if data have recently been written to a file, a subsequent read of that same data will find the information already in the buffer cache, where it can be returned immediately instead of having to wait for an I/O operation to read it from the disk.

Unix systems also do read-ahead; since most reads are sequential, upon reading one block, the operating system will read several more consecutive disk blocks so that their information will already be in the buffer cache when a program asks for it. If multiple programs are reading the same file, they all benefit, since they will all get their data from the same copy of the file's disk blocks in the buffer cache.

All of this caching is wonderful, but of course there's no free lunch. While data are in the buffer cache and before they have been written to disk, there's a small—but very real—window in which disaster can strike; for example, if the power goes out. Modern disk drives exacerbate this problem: Many have their own internal buffers, so while data may have made it to the drive, it may not have made it onto the media when the power goes out! This can be a significant issue for small systems that aren't in a data center with controlled power or that don't have an uninterruptible power supply (UPS).5

For most applications, the chance that data in the buffer cache might be inadvertently lost is acceptably small. However, for some applications, any such chance is not acceptable. Thus, the notion of synchronous I/O was added to Unix systems, whereby a program can be guaranteed that if a system call has returned, the data are safely written on a physical storage device.
The O_DSYNC flag guarantees data integrity; the data and any other information that the operating system needs to find the data are written to disk before write() returns. However, metadata, such as access and modification times, may not be written to disk. The O_SYNC flag requires that metadata also be written to disk before write() returns. (Here too there is no free lunch; synchronous writes can seriously affect the performance of a program, slowing it down noticeably.)
5 If you don't have a UPS and you use your system for critical work, we highly recommend investing in one. You should also be doing regular backups.
The O_RSYNC flag is for data reads: If read() finds data in the buffer cache that were scheduled for writing to disk, then read() won't return that data until they have been written to disk. The other two flags can affect this: In particular, O_SYNC will cause read() to wait until the file metadata have been written out as well.

NOTE As of kernel version 2.4, Linux treats all three flags the same, with essentially the meaning of O_SYNC. Furthermore, Linux defines additional flags that are Linux specific and intended for specialized uses. Check the GNU/Linux open(2) manpage for more information.
4.7 Forcing Data to Disk

Earlier, we described the O_DSYNC, O_RSYNC, and O_SYNC flags for open(). We noted that using these flags could slow a program down, since each write() does not return until all data have been written to physical media.

For a slightly higher risk level, we can have our cake and eat it too. We do this by opening a file without one of the O_xSYNC flags and then using one of the following two system calls at whatever point it's necessary to have the data safely moved to physical storage:

POSIX FSC
POSIX SIO

#include <unistd.h>

int fsync(int fd);
int fdatasync(int fd);
The fdatasync() system call is like O_DSYNC: It forces all file data to be written to the final physical device. The fsync() system call is like O_SYNC, forcing not just file data, but also file metadata, to physical storage. The fsync() call is more portable; it has been around in the Unix world for longer and is more likely to exist across a broad range of systems. You can use these calls with file pointers by first calling fflush() and then using fileno() to obtain the underlying file descriptor. Here is an fpsync() function that can be used to wrap both operations in one call. It returns 0 on success:
/* fpsync --- sync a stdio FILE * variable */

int fpsync(FILE *fp)
{
    if (fp == NULL || fflush(fp) == EOF ||
            fsync(fileno(fp)) < 0)
        return -1;
    return 0;
}
Technically, both of these calls are extensions to the base POSIX standard: fsync() in the “File Synchronization” extension (FSC), and fdatasync() in the “Synchronized Input and Output” extension. Nevertheless, you can use them on a GNU/Linux system without any problem.
4.8 Setting File Length

Two system calls make it possible to adjust the size of a file:

XSI POSIX

#include <sys/types.h>
#include <unistd.h>

int truncate(const char *path, off_t length);
int ftruncate(int fd, off_t length);
As should be obvious from the parameters, truncate() takes a filename argument, whereas ftruncate() works on an open file descriptor. (The xxx() and fxxx() naming convention for system call pairs that work on a filename or file descriptor is common. We see several examples in this and subsequent chapters.) For both, the length argument is the new size of the file.

These system calls originated in 4.2 BSD Unix, and in early systems could only be used to shorten a file's length, hence the name. (They were created to simplify implementation of the truncate operation in Fortran.) On modern systems, including Linux, the name is a misnomer, since it's possible to extend the length of a file with these calls, not just shorten a file. (However, POSIX indicates that the ability to extend a file is an XSI extension.)

For these calls, the file being truncated must have write permission (for truncate()), or have been opened for writing (for ftruncate()). If the file is being shortened, any data past the new end of the file are lost. (Thus, you can't shorten the file, lengthen it again, and expect to find the original data.) If the file is extended, as with data written after an lseek(), the data between the old end of the file and the new end of the file read as zeros.
These calls are very different from ‘open(file, ...|O_TRUNC, mode)’. The latter truncates a file completely, throwing away all its data. These calls simply set the file’s absolute length to the given value. These functions are fairly specialized; they’re used only four times in all of the GNU Coreutils code. We present an example use of ftruncate() in Section 5.5.3, “Changing Timestamps: utime(),” page 157.
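The zero-fill behavior is easy to see from C. The following sketch (ours, with error handling abbreviated) extends an empty file with ftruncate() and reads the new bytes back:

```c
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

/* Create an empty file, extend it to 4 bytes with ftruncate(),
   and verify that the new bytes read back as zeros.
   Returns 0 on success, -1 on any failure. */
int extend_reads_zeros(const char *path)
{
    char buf[4] = { 1, 1, 1, 1 };
    int i, fd = open(path, O_CREAT | O_TRUNC | O_RDWR, 0644);

    if (fd < 0 || ftruncate(fd, 4) < 0)
        return -1;
    if (read(fd, buf, 4) != 4) {        /* file offset is still 0 */
        (void) close(fd);
        return -1;
    }
    (void) close(fd);
    for (i = 0; i < 4; i++)
        if (buf[i] != 0)
            return -1;
    return 0;
}
```

(Note that extending a file this way relies on the XSI behavior mentioned above; it works on Linux.)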
4.9 Summary

• When a system call fails, it usually returns -1, and the global variable errno is set to a predefined value indicating the problem. The functions perror() and strerror() can be used for reporting errors.

• Files are manipulated by small integers called file descriptors. File descriptors for standard input, standard output, and standard error are inherited from a program's parent process. Others are obtained with open() or creat(). They are closed with close(), and getdtablesize() returns the maximum number of allowed open files. The value of the umask (set with umask()) affects the permissions given to new files created with creat() or the O_CREAT flag for open().

• The read() and write() system calls read and write data, respectively. Their interface is simple. In particular, they do no interpretation of the data; files are linear streams of bytes. The lseek() system call provides random access I/O: the ability to move around within a file.

• Additional flags for open() provide for synchronous I/O, whereby data make it all the way to the physical storage media before write() or read() return. Data can also be forced to disk on a controlled basis with fsync() or fdatasync().

• The truncate() and ftruncate() system calls set the absolute length of a file. (On older systems, they can only be used to shorten a file; on modern systems they can also extend a file.)
4.10 Exercises

1. Using just open(), read(), write(), and close(), write a simple copy program that copies the file named by its first argument to the file named by its second.
2. Enhance the copy program to accept "-" to mean "standard input" if used as the first argument and "standard output" as the second. Does 'copy - -' work correctly?

3. Look at the proc(5) manpage on a GNU/Linux system, in particular the fd subsection. Do an 'ls -l /dev/fd' and examine the files in the /proc/self/fd directory. If /dev/stdin and friends had been around in the early versions of Unix, how would that have simplified the code for the V7 cat program? (Many other modern Unix systems have a /dev/fd directory or filesystem. If you're not using GNU/Linux, see what you can discover about your Unix version.)

4. Even though you don't understand it yet, try to copy the code segment from the V7 cat.c that uses the struct stat and the fstat() function into ch04-cat.c so that it too reports an error for 'cat file >> file'.

5. (Easy.) Assuming the existence of strerror(), write your own version of perror().

6. What is the result of 'ulimit -n' on your system?

7. Write a simple version of the umask program, named myumask, that takes an octal mask on the command line. Use strtol() with a base of 8 to convert the character string command-line argument into an integer value. Change the umask to the new mask with the umask() system call. Compile and run myumask, and then examine the value of the umask with the regular umask command. Explain the results. (Hint: in Bash, enter 'type umask'.)

8. Change the simple copy program you wrote earlier to use open() with the O_SYNC flag. Using the time command, compare the performance of the original version and the new version on a large file.

9. For ftruncate(), we said that the file must have been opened for writing. How can a file be open for writing when the file itself doesn't have write permission?

10. Write a truncate program whose usage is 'truncate file length'.
5 Directories and File Metadata
In this chapter

5.1 Considering Directory Contents                              page 118
5.2 Creating and Removing Directories                           page 130
5.3 Reading Directories                                         page 132
5.4 Obtaining Information about Files                           page 139
5.5 Changing Ownership, Permission, and Modification Times      page 155
5.6 Summary                                                     page 162
Exercises                                                       page 163
This chapter continues the climb up the learning curve toward the next plateau: understanding directories and information about files.
In this chapter we explore how file information is stored in a directory, how directories themselves are read, created, and removed, what information about files is available, and how to retrieve it. Finally, we explore other ways to update file metadata, such as the owner, group, permissions, and access and modification times.
5.1 Considering Directory Contents

All Unix systems, including Linux, use the same conceptual design for storing file information on disk. Although there is considerable variation in the implementation of the design, the interface at the C level remains consistent, making it possible to write portable programs that compile and run on many different systems.
5.1.1 Definitions

[Cartoon omitted: Copyright 1997–2004 © J.D. "Illiad" Frazer. Used with permission. http://www.userfriendly.org]
We start the discussion by defining some terms.

Partition
    A unit of physical storage. Physical partitions are typically either part of a disk or an entire disk. Modern systems make it possible to create logical partitions from multiple physical ones.
Filesystem
    A partition (physical or logical) that contains file data and metadata, information about files (as opposed to the file contents, which is information in the files). Such metadata include file ownership, permissions, size, and so on, as well as information for use by the operating system in locating file contents.

    You place filesystems "in" partitions (a one-to-one correspondence) by writing standard information in them. This is done with a user-level program, such as mke2fs on GNU/Linux, or newfs on Unix. (The Unix mkfs command makes partitions but is difficult to use directly. newfs calls it with the correct parameters. If your system is a Unix system, see the newfs(8) and mkfs(8) manpages for the details.)

    For the most part, GNU/Linux and Unix hide the existence of filesystems and partitions. (Further details are given in Section 8.1, "Mounting and Unmounting Filesystems," page 228.) Everything is accessed by pathnames, without reference to which disk a file lives on. (Contrast this with almost every other commercial operating system, such as OpenVMS, or the default behavior of any Microsoft system.)

Inode
    Short for "index node," initially abbreviated "i-node" and now written "inode." A small block of information describing everything about a file except the file's name(s). The number of inodes, and thus the number of unique files per filesystem, is set and made permanent when the filesystem is created. 'df -i' can tell you how many inodes you have and how many are used.

Device
    In the context of files, filesystems, and file metadata, a unique number representing an in-use ("mounted") filesystem. The (device, inode) pair uniquely identifies the file: Two different files are guaranteed to have different (device, inode) pairs. This is discussed in more detail later in this chapter.

Directory
    A special file, containing a list of (inode number, name) pairs. Directories can be opened for reading but not for writing; the operating system makes all the changes to a directory's contents.
Conceptually, each disk block contains either some number of inodes, or file data. The inode, in turn, contains pointers to the blocks that contain the file's data. See Figure 5.1.

Disk blocks, linearly in a partition:

| INODE | INODE | INODE | INODE | INODE | INODE | Data | Data | Data | Data |

FIGURE 5.1 Conceptual view of inode and data blocks
The figure shows all the inode blocks at the front of the partition and the data blocks after them. Early Unix filesystems were indeed organized this way. However, while all modern systems still have inodes and data blocks, the organization has changed for improved efficiency and robustness. The details vary from system to system, and even within GNU/Linux systems there are multiple kinds of filesystems, but the concepts are the same.
5.1.2 Directory Contents

Directories make the connection between a filename and an inode. Directory entries contain an inode number and a filename. They also contain additional bookkeeping information that is not of interest to us here. See Figure 5.2.

Early Unix systems had two-byte inode numbers and up to 14-byte filenames. Here is the entire content of the V7 /usr/include/sys/dir.h:
23    .           Dot
19    ..          Dot-dot
42    guide       Filename
 0    tempdata    Empty slot
37    .profile    Filename

FIGURE 5.2 Conceptual directory contents

#ifndef DIRSIZ
#define DIRSIZ  14
#endif

struct direct {
    ino_t   d_ino;
    char    d_name[DIRSIZ];
};
An ino_t is defined in V7 as 'typedef unsigned int ino_t;'. Since a PDP-11 int is 16 bits, so too is the ino_t.

This organization made it easy to read directories directly; since the size of an entry was fixed, the code was simple. (The only thing to watch out for was that a full 14-character d_name was not NUL-terminated.)

Directory content management was also easy for the system. When a file was removed from a directory, the system replaced the inode number with a binary zero, signifying that the "slot" in the directory was unused. New files could then reuse the empty slot. This helped keep the size of directory files themselves reasonable. (By convention, inode number 1 is unused; inode number 2 is always the first usable inode. More details are provided in Section 8.1, "Mounting and Unmounting Filesystems," page 228.)

Modern systems provide long filenames. Each directory entry is of variable length, with a common limit of 255 bytes for the filename component of the directory entry. Later on, we show how to read a directory's contents on a modern system. Modern systems also provide 32-bit (or even 64-bit!) inode numbers.
5.1.3 Hard Links

When a file is created with open() or creat(), the system finds an unused inode and assigns it to the new file. It creates the directory entry for the file, with the file's name and inode number in it. The -i option to ls shows the inode number:

$ echo hello, world > message            Create new file
$ ls -il message                         Show inode number too
228786 -rw-r--r--    1 arnold   devel   13 May  4 15:43 message
Since directory entries associate filenames with inodes, it is possible for one file to have multiple names. Each directory entry referring to the same inode is called a link, or hard link, to the file. Links are created with the ln command. The usage is 'ln oldfile newfile':

$ ln message msg                         Create a link
$ cat msg                                Show contents of new name
hello, world
$ ls -il msg message                     Show inode numbers
228786 -rw-r--r--    2 arnold   devel   13 May  4 15:43 message
228786 -rw-r--r--    2 arnold   devel   13 May  4 15:43 msg
The output shows that the inode numbers of the two files are the same, and the third field in the long output is now 2. This field is the link count, which reports how many links (directory entries referring to the inode) the file has.

It cannot be emphasized enough: Hard links all refer to the same file. If you change one, you have changed the others:

$ echo "Hi, how ya doin' ?" > msg        Change file by new name
$ cat message                            Show contents by old name
Hi, how ya doin' ?
$ ls -il message msg                     Show info. Size changed
228786 -rw-r--r--    2 arnold   devel   19 May  4 15:51 message
228786 -rw-r--r--    2 arnold   devel   19 May  4 15:51 msg
Although we’ve created two links to the same file in a single directory, hard links are not restricted to being in the same directory; they can be in any other directory on the same filesystem. (This is discussed a bit more in Section 5.1.6, “Symbolic Links,” page 128.) Additionally, you can create a link to a file you don’t own as long as you have write permission in the directory in which you’re creating the link. (Such a file retains all the attributes of the original file: the owner, permissions, and so on. This is because it is the original file; it has only acquired an additional name.) User-level code cannot create a hard link to a directory.
Once a link is removed, creating a new file by the same name as the original file creates a new file:

$ rm message                             Remove old name
$ echo "What's happenin?" > message      Reuse the name
$ ls -il msg message                     Show information
228794 -rw-r--r--    1 arnold   devel   17 May  4 15:58 message
228786 -rw-r--r--    1 arnold   devel   19 May  4 15:51 msg
Notice that the link counts for both files are now equal to 1.

At the C level, links are created with the link() system call:

POSIX

#include <unistd.h>

int link(const char *oldpath, const char *newpath);
The return value is 0 if the link was created successfully, or -1 otherwise, in which case errno reflects the error. An important failure case is one in which newpath already exists. The system won't remove it for you, since attempting to do so can cause inconsistencies in the filesystem.

5.1.3.1 The GNU link Program

The ln program is complicated and large. However, the GNU Coreutils contains a simple link program that just calls link() on its first two arguments. The following example shows the code from link.c, with some irrelevant parts deleted. Line numbers relate to the actual file.

20  /* Implementation overview:
21
22     Simply call the system 'link' function */
23

    … #include statements omitted for brevity …

34  /* The official name of this program (e.g., no `g' prefix).  */
35  #define PROGRAM_NAME "link"
36
37  #define AUTHORS "Michael Stone"
38
39  /* Name this program was run with.  */
40  char *program_name;
41
42  void
43  usage (int status)
44  {
45
    … omitted for brevity …
62
63  }

64  int
65  main (int argc, char **argv)
66  {
67    program_name = argv[0];
68    setlocale (LC_ALL, "");
69    bindtextdomain (PACKAGE, LOCALEDIR);
70    textdomain (PACKAGE);
71
72    atexit (close_stdout);
73
74    parse_long_options (argc, argv, PROGRAM_NAME, GNU_PACKAGE, VERSION,
75                        AUTHORS, usage);
76
77    /* The above handles --help and --version.  Since there is no
78       other invocation of getopt, handle `--' here.  */
79    if (1 < argc && STREQ (argv[1], "--"))
80      {
81        --argc;
82        ++argv;
83      }
84
85    if (argc < 3)
86      {
87        error (0, 0, _("too few arguments"));
88        usage (EXIT_FAILURE);
89      }
90
91    if (3 < argc)
92      {
93        error (0, 0, _("too many arguments"));
94        usage (EXIT_FAILURE);
95      }
96
97    if (link (argv[1], argv[2]) != 0)
98      error (EXIT_FAILURE, errno, _("cannot create link %s to %s"),
99             quote_n (0, argv[2]), quote_n (1, argv[1]));
100
101   exit (EXIT_SUCCESS);
102 }
Lines 67–75 are typical Coreutils boilerplate, setting up internationalization, the final action upon exit, and parsing the arguments. Lines 79–95 make sure that link is called with only two arguments. The link() system call itself occurs on line 97. (The quote_n() function provides quoting of the arguments in a style suitable for the current locale; the details aren’t important here.)
5.1.3.2 Dot and Dot-Dot

Rounding off the discussion of links, let's look at how the '.' and '..' special names are managed. They are really just hard links. In the first case, '.' is a hard link to the directory containing it, and '..' is a hard link to the parent directory. The operating system creates these links for you; as mentioned earlier, user-level code cannot create a hard link to a directory. This example illustrates the links:

$ pwd                                    Show current directory
/tmp
$ ls -ldi /tmp                           Show its inode number
225345 drwxrwxrwt   14 root     root    4096 May  4 16:15 /tmp
$ mkdir x                                Create a new directory
$ ls -ldi x                              And show its inode number
 52794 drwxr-xr-x    2 arnold   devel   4096 May  4 16:27 x
$ ls -ldi x/. x/..                       Show . and .. inode numbers
 52794 drwxr-xr-x    2 arnold   devel   4096 May  4 16:27 x/.
225345 drwxrwxrwt   15 root     root    4096 May  4 16:27 x/..
The root’s parent directory (/..) is a special case; we defer discussion of it until Chapter 8, “Filesystems and Directory Walks,” page 227.
5.1.4 File Renaming

Given the way in which directory entries map names to inode numbers, renaming a file is conceptually quite easy:

1. If the new name for the file names an existing file, remove the existing file first.

2. Create a new link to the file by the new name.

3. Remove the old name (link) for the file. (Removing names is discussed in the next section.)

Early versions of the mv command did work this way. However, when done this way, file renaming is not atomic; that is, it doesn't happen in one uninterruptible operation. And, on a heavily loaded system, a malicious user could take advantage of race conditions,1 subverting the rename operation and substituting a different file for the original one.
1 A race condition is a situation in which details of timing can produce unintended side effects or bugs. In this case, the directory, for a short period of time, is in an inconsistent state, and it is this inconsistency that introduces the vulnerability.
For this reason, 4.2 BSD introduced the rename() system call:

ISO C

#include <stdio.h>

int rename(const char *oldpath, const char *newpath);
On Linux systems, the renaming operation is atomic; the manpage states: If newpath already exists it will be atomically replaced …, so that there is no point at which another process attempting to access newpath will find it missing. If newpath exists but the operation fails for some reason, rename guarantees to leave an instance of newpath in place. However, when overwriting there will probably be a window in which both oldpath and newpath refer to the file being renamed. As with other system calls, a 0 return indicates success, and a return value of -1 indicates an error.
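This atomicity is what makes the classic idiom of writing a scratch file and then renaming it over the original safe: readers see either the old contents or the new, never a partial file. A sketch (the function name and paths are illustrative; error handling abbreviated):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

/* Replace path's contents atomically: write data to tmppath,
   then rename() tmppath over path.  Returns 0 on success. */
int replace_contents(const char *path, const char *tmppath,
                     const char *data)
{
    int fd = open(tmppath, O_CREAT | O_TRUNC | O_WRONLY, 0644);

    if (fd < 0)
        return -1;
    if (write(fd, data, strlen(data)) != (ssize_t) strlen(data)
            || close(fd) < 0) {
        (void) unlink(tmppath);
        return -1;
    }
    return rename(tmppath, path);
}
```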
5.1.5 File Removal

Removing a file means removing the file's entry in the directory and decrementing the file's link count (maintained in the inode). The contents of the file, and the disk blocks holding them, are not freed until the link count reaches zero. The system call is named unlink():

POSIX

#include <unistd.h>

int unlink(const char *pathname);
Given our discussion of file links, the name makes sense; this call removes the given link (directory entry) for the file. It returns 0 on success and -1 on error. The ability to remove a file requires write permission only for the directory and not for the file itself. This fact can be confusing, particularly for new Linux/Unix users. However, since the operation is one on the directory, this makes sense; it is the directory contents that are being modified, not the file’s contents.2
2 Indeed, the file's metadata are changed (the number of links), but that does not affect any other file attribute, nor does it affect the file's contents. Updating the link count is the only operation on a file that doesn't involve checking the file's permissions.
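The link count is also visible from C, in the st_nlink field of struct stat (stat() is covered in detail later in this chapter). A small sketch of ours tying link() and unlink() together:

```c
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

/* Create path, give it a second name with link(), and return the
   resulting link count reported by stat(), or 0 on any error. */
long linked_count(const char *path, const char *newname)
{
    struct stat st;
    int fd = creat(path, 0644);

    if (fd < 0 || close(fd) < 0)
        return 0;
    if (link(path, newname) < 0 || stat(path, &st) < 0)
        return 0;
    return (long) st.st_nlink;
}
```

After the call, unlinking either name drops the count back to 1; unlinking both frees the inode.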
5.1.5.1 Removing Open Files

Since the earliest days of Unix, it has been possible to remove open files. Simply call unlink() with the filename after a successful call to open() or creat(). At first glance, this seems to be a strange thing to do. Since the system frees the data blocks when a file's link count goes to zero, is it even possible to use the open file?

The answer is yes, you can continue to use the open file normally. The system knows that the file is open, and therefore it delays the release of the file's storage until the last file descriptor on the file is closed. Once the file is completely unused, the storage is freed.

This operation also happens to be a useful one: It is an easy way for a program to get temporary file storage that is guaranteed to be both private and automatically released when no longer needed:

/* Obtaining private temporary storage, error checking omitted for brevity */
int fd;
int flags = O_CREAT|O_EXCL|O_TRUNC|O_RDWR;

fd = open("/tmp/myfile", flags, 0000);          Open the file
unlink("/tmp/myfile");                          Remove it
... continue to use file ...
close(fd);                                      Close file, free storage
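The idiom above can be fleshed out into a small self-checking sketch. The function name is ours, and we use mkstemp() (rather than a fixed filename) so the sketch avoids the race inherent in opening a hard-coded name in /tmp; the rest follows the pattern in the text: open, unlink, keep using the descriptor, then close.

```c
#include <stdlib.h>     /* for mkstemp() */
#include <string.h>     /* for strcmp() */
#include <unistd.h>     /* for unlink(), write(), lseek(), read(), close() */

/* scratch_demo --- create an anonymous scratch file, write to it,
   read it back after the name is gone; returns 0 on success */
int scratch_demo(void)
{
    char name[] = "/tmp/scratchXXXXXX";   /* template for mkstemp() */
    char buf[64];
    ssize_t n;
    int fd = mkstemp(name);               /* create and open, mode 0600 */

    if (fd < 0)
        return -1;
    if (unlink(name) < 0) {               /* remove the directory entry */
        close(fd);
        return -1;
    }

    /* The storage is still there: write, seek back, and read it. */
    if (write(fd, "hello", 5) != 5
        || lseek(fd, 0L, SEEK_SET) < 0
        || (n = read(fd, buf, sizeof buf - 1)) != 5) {
        close(fd);
        return -1;
    }
    buf[n] = '\0';
    close(fd);                            /* now the blocks are freed */
    return strcmp(buf, "hello") == 0 ? 0 : -1;
}
```

While scratch_demo() runs, the file occupies disk blocks but has no name anywhere in the filesystem; the close() at the end releases the storage.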
The downside to this approach is that it's also possible for a runaway application to fill up a filesystem with an open but anonymous file, in which case the system administrator has to try to find and kill the process. In olden days, a reboot and filesystem consistency check might have been required; thankfully, this is exceedingly rare on modern systems.

5.1.5.2 Using ISO C: remove()

ISO C provides the remove() function for removing files; this is intended to be a general function, usable on any system that supports ISO C, not just Unix and GNU/Linux:

#include <stdio.h>                                         ISO C

int remove(const char *pathname);

While not technically a system call, the return value is in the same vein: 0 on success and -1 on error, with errno indicating the error.
128
Chapter 5 • Directories and File Metadata
On GNU/Linux, remove() uses the unlink() system call to remove files, and the rmdir() system call (discussed later in the chapter) to remove directories. (On older GNU/Linux systems not using GLIBC, remove() is an alias for unlink(); this fails on directories. If you have such a system, you should probably upgrade it.)
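remove()'s dual behavior is easy to demonstrate. Here is a minimal sketch (function name and pathnames are ours, chosen for illustration): it creates an empty file and an empty directory, then deletes both with the same call.

```c
#include <stdio.h>      /* for fopen(), fclose(), remove() */
#include <errno.h>      /* for errno, EEXIST */
#include <sys/types.h>
#include <sys/stat.h>   /* for mkdir() */

/* remove_both --- create a file and a directory, then delete both
   with remove(); returns 0 on success */
int remove_both(const char *file, const char *dir)
{
    FILE *fp = fopen(file, "w");             /* create an empty file */

    if (fp == NULL)
        return -1;
    fclose(fp);
    if (mkdir(dir, 0755) < 0 && errno != EEXIST)  /* create an empty directory */
        return -1;

    /* One call handles both cases: unlink() for the file,
       rmdir() for the directory. */
    if (remove(file) != 0 || remove(dir) != 0)
        return -1;
    return 0;
}
```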
5.1.6 Symbolic Links

We started the chapter with a discussion of partitions, filesystems, and inodes. We also saw that directory entries associate names with inode numbers. Because directory entries contain no other information, hard links are restricted to files within the same filesystem. This has to be; there is no way to distinguish inode 2341 on one filesystem from inode 2341 on another filesystem. Here is what happens when we try:

$ mount                                     Show filesystems in use
/dev/hda2 on / type ext3 (rw)
/dev/hda5 on /d type ext3 (rw)
...
$ ls -li /tmp/message                       Earlier example was on filesystem for /
228786 -rw-r--r--   2 arnold  devel  19 May  4 15:51 /tmp/message
$ cat /tmp/message
Hi, how ya doin' ?
$ /bin/pwd                                  Current directory is on a different filesystem
/d/home/arnold
$ ln /tmp/message .                         Attempt the link
ln: creating hard link `./message' to `/tmp/message': Invalid cross-device link
Large systems often have many partitions, both on physically attached local disks and on remotely mounted network filesystems. The hard-link restriction to the same filesystem is inconvenient, for example, if some files or directories must be moved to a new location, but old software uses a hard-coded filename for the old location. To get around this restriction, 4.2 BSD introduced symbolic links. A symbolic link (also referred to as a soft link) is a special kind of file (just as a directory is a special kind of file). The contents of the file are the pathname of the file being “pointed to.” All modern Unix systems, including Linux, provide symbolic links; indeed they are now part of POSIX. Symbolic links may refer to any file anywhere on the system. They may also refer to directories. This makes it easy to move directories from place to place, with a symbolic link left behind in the original location pointing to the new location.
When processing a filename, the system notices symbolic links and instead performs the action on the pointed-to file or directory. Symbolic links are created with the -s option to ln:

$ /bin/pwd                                  Where are we
/d/home/arnold                              On a different filesystem
$ ln -s /tmp/message ./hello                Create a symbolic link
$ cat hello                                 Use it
Hi, how ya doin' ?
$ ls -l hello                               Show information about it
lrwxrwxrwx  1 arnold  devel  12 May  4 16:41 hello -> /tmp/message
The file pointed to by the link need not exist. The system detects this at runtime and acts appropriately:

$ rm /tmp/message                           Remove pointed-to file
$ cat ./hello                               Attempt to use it by the soft link
cat: ./hello: No such file or directory
$ echo hi again > hello                     Create new file contents
$ ls -l /tmp/message                        Show pointed-to file info ...
-rw-r--r--  1 arnold  devel  9 May  4 16:45 /tmp/message
$ cat /tmp/message                          ... and contents
hi again
Symbolic links are created with the symlink() system call:

#include <unistd.h>                                        POSIX

int symlink(const char *oldpath, const char *newpath);

The oldpath argument names the pointed-to file or directory, and newpath is the name of the symbolic link to be created. The return value is 0 on success and -1 on error; see your symlink(2) manpage for the possible errno values.

Symbolic links have their disadvantages:

• They take up extra disk space, requiring a separate inode and data block. Hard links take up only a directory slot.

• They add overhead. The kernel has to work harder to resolve a pathname containing symbolic links.

• They can introduce "loops." Consider the following:

$ rm -f a b                                 Make sure 'a' and 'b' don't exist
$ ln -s a b                                 Symlink old file 'a' to new file 'b'
$ ln -s b a                                 Symlink old file 'b' to new file 'a'
$ cat a                                     What happens?
cat: a: Too many levels of symbolic links
The kernel has to be able to detect this case and produce an error message.

• They are easy to break. If you move the pointed-to file to a different location or rename it, the symbolic link is no longer valid. This can't happen with a hard link.
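The symlink() call itself is simple. A minimal sketch (function name and pathnames are ours): it also demonstrates the point made above, that creation succeeds even when the pointed-to file does not exist, leaving a dangling link.

```c
#include <stdio.h>      /* for fprintf() */
#include <errno.h>      /* for errno */
#include <string.h>     /* for strerror() */
#include <unistd.h>     /* for symlink(), unlink() */

/* make_link --- create a symbolic link, replacing any leftover one;
   returns 0 on success */
int make_link(const char *target, const char *linkname)
{
    (void) unlink(linkname);           /* discard any stale link, ignore errors */

    /* The target need not exist; the link is simply dangling. */
    if (symlink(target, linkname) < 0) {
        fprintf(stderr, "symlink: %s\n", strerror(errno));
        return -1;
    }
    return 0;
}
```

A follow-up stat() on such a dangling link fails with ENOENT, while lstat() (described later in the chapter) still succeeds, since the link itself exists.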
5.2 Creating and Removing Directories

Creating and removing directories is straightforward. The two system calls, not surprisingly, are mkdir() and rmdir(), respectively:

#include <sys/types.h>                                     POSIX
#include <sys/stat.h>

int mkdir(const char *pathname, mode_t mode);

#include <unistd.h>                                        POSIX

int rmdir(const char *pathname);
Both return 0 on success and -1 on error, with errno set appropriately. For mkdir(), the mode argument represents the permissions to be applied to the directory. It is completely analogous to the mode arguments for creat() and open() discussed in Section 4.6, "Creating Files," page 106.

Both functions handle the '.' and '..' in the directory being created or removed. A directory must be empty before it can be removed; errno is set to ENOTEMPTY if the directory isn't empty. (In this case, "empty" means the directory contains only '.' and '..'.)

New directories, like all files, are assigned a group ID number. Unfortunately, how this works is complicated. We delay discussion until Section 11.5.1, "Default Group for New Files and Directories," page 412.

Both functions work one directory level at a time. If /somedir exists and /somedir/sub1 does not, 'mkdir("/somedir/sub1/sub2")' fails. Each component in a long pathname has to be created individually (thus the -p option to mkdir; see mkdir(1)). Also, if pathname ends with a / character, mkdir() and rmdir() will fail on some systems and succeed on others. The following program, ch05-trymkdir.c, demonstrates both aspects.
/* ch05-trymkdir.c --- Demonstrate mkdir() behavior.
   Courtesy of Nelson H.F. Beebe. */

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>

#if !defined(EXIT_SUCCESS)
#define EXIT_SUCCESS 0
#endif

void do_test(const char *path)
{
    int retcode;

    errno = 0;
    retcode = mkdir(path, 0755);
    printf("mkdir(\"%s\") returns %d: errno = %d [%s]\n",
           path, retcode, errno, strerror(errno));
}

int main(void)
{
    do_test("/tmp/t1/t2/t3/t4");               /* Attempt creation in subdirs */
    do_test("/tmp/t1/t2/t3");
    do_test("/tmp/t1/t2");
    do_test("/tmp/t1");

    do_test("/tmp/u1");                        /* Make subdirs */
    do_test("/tmp/u1/u2");
    do_test("/tmp/u1/u2/u3");
    do_test("/tmp/u1/u2/u3/u4");

    do_test("/tmp/v1/");                       /* How is trailing '/' handled? */
    do_test("/tmp/v1/v2/");
    do_test("/tmp/v1/v2/v3/");
    do_test("/tmp/v1/v2/v3/v4/");

    return (EXIT_SUCCESS);
}
Here are the results under GNU/Linux:

$ ch05-trymkdir
mkdir("/tmp/t1/t2/t3/t4") returns -1: errno = 2 [No such file or directory]
mkdir("/tmp/t1/t2/t3") returns -1: errno = 2 [No such file or directory]
mkdir("/tmp/t1/t2") returns -1: errno = 2 [No such file or directory]
mkdir("/tmp/t1") returns 0: errno = 0 [Success]
mkdir("/tmp/u1") returns 0: errno = 0 [Success]
mkdir("/tmp/u1/u2") returns 0: errno = 0 [Success]
mkdir("/tmp/u1/u2/u3") returns 0: errno = 0 [Success]
mkdir("/tmp/u1/u2/u3/u4") returns 0: errno = 0 [Success]
mkdir("/tmp/v1/") returns 0: errno = 0 [Success]
mkdir("/tmp/v1/v2/") returns 0: errno = 0 [Success]
mkdir("/tmp/v1/v2/v3/") returns 0: errno = 0 [Success]
mkdir("/tmp/v1/v2/v3/v4/") returns 0: errno = 0 [Success]

Note how GNU/Linux accepts a trailing slash. Not all systems do.
5.3 Reading Directories

On the original Unix systems, reading directory contents was easy. A program opened the directory with open() and read binary struct direct structures directly, 16 bytes at a time. The following fragment of code is from the V7 rmdir program,3 lines 60–74. It shows the check for the directory being empty.

60      if((fd = open(name,0)) < 0) {
61              fprintf(stderr, "rmdir: %s unreadable\n", name);
62              ++Errors;
63              return;
64      }
65      while(read(fd, (char *)&dir, sizeof dir) == sizeof dir) {
66              if(dir.d_ino == 0) continue;
67              if(!strcmp(dir.d_name, ".") ||
68                 !strcmp(dir.d_name, "..")) continue;
69              fprintf(stderr, "rmdir: %s not empty\n", name);
70              ++Errors;
71              close(fd);
72              return;
73      }
74      close(fd);
Line 60 opens the directory for reading (a second argument of 0, equal to O_RDONLY). Line 65 reads the struct direct. Line 66 is the check for an empty directory slot; that is, one with an inode number of 0. Lines 67 and 68 check for ‘.’ and ‘..’. Upon reaching line 69, we know that some other filename has been seen and, therefore, that the directory isn’t empty. (The test ‘!strcmp(s1, s2)’ is a shorter way of saying ‘strcmp(s1, s2) == 0’; that is, testing that the strings are equal. For what it’s worth, we consider the ‘!strcmp(s1, s2)’ form to be poor style. As Henry Spencer once said, “strcmp() is not a boolean!”)
3 See /usr/src/cmd/rmdir.c in the V7 distribution.
When 4.2 BSD introduced a new filesystem format that allowed longer filenames and provided better performance, it also introduced several new functions to provide a directory-reading abstraction. This suite of functions is usable no matter what the underlying filesystem and directory organization are. The basic parts of it are what is standardized by POSIX, and programs using it are portable across GNU/Linux and Unix systems.
5.3.1 Basic Directory Reading

Directory entries are represented by a struct dirent (not the same as the V7 struct direct!):

struct dirent {
    ...
    ino_t d_ino;          /* XSI extension --- see text */
    char d_name[...];     /* See text on the size of this array */
    ...
};

For portability, POSIX specifies only the d_name field, which is a zero-terminated array of bytes representing the filename part of the directory entry. The size of d_name is not specified by the standard, other than to say that there may be at most NAME_MAX bytes before the terminating zero. (NAME_MAX is defined in <limits.h>.) The XSI extension to POSIX provides for the d_ino inode number field.

In practice, since filenames can be of variable length and NAME_MAX is usually fairly large (like 255), the struct dirent contains additional members that aid in the bookkeeping of variable-length directory entries on disk. These additional members are not relevant for everyday code.

The following functions provide the directory-reading interface:

#include <sys/types.h>                                     POSIX
#include <dirent.h>

DIR *opendir(const char *name);                 Open a directory for reading
struct dirent *readdir(DIR *dir);               Return one struct dirent at a time
int closedir(DIR *dir);                         Close an open directory
void rewinddir(DIR *dirp);                      Return to the front of a directory
The DIR type is analogous to the FILE type in <stdio.h>. It is an opaque type, meaning that application code is not supposed to know what's inside it; its contents are for use by the other directory routines. If opendir() returns NULL, the named directory could not be opened for reading and errno is set to indicate the error.
Once you have an open DIR * variable, it can be used to retrieve a pointer to a struct dirent representing the next directory entry. readdir() returns NULL upon end-of-file or error.

Finally, closedir() is analogous to the fclose() function in <stdio.h>; it closes the open DIR * variable. The rewinddir() function can be used to start over at the beginning of a directory.

With these routines in hand (or at least in the C library), we can write a simple catdir program that "cats" the contents of a directory. Such a program is presented in ch05-catdir.c:

 1  /* ch05-catdir.c --- Demonstrate opendir(), readdir(), closedir(). */
 2
 3  #include <stdio.h>         /* for printf() etc. */
 4  #include <errno.h>         /* for errno */
 5  #include <sys/types.h>     /* for system types */
 6  #include <dirent.h>        /* for directory functions */
 7
 8  char *myname;
 9  int process(char *dir);
10
11  /* main --- loop over directory arguments */
12
13  int main(int argc, char **argv)
14  {
15      int i;
16      int errs = 0;
17
18      myname = argv[0];
19
20      if (argc == 1)
21          errs = process(".");          /* default to current directory */
22      else
23          for (i = 1; i < argc; i++)
24              errs += process(argv[i]);
25
26      return (errs != 0);
27  }
This program is quite similar to ch04-cat.c (see Section 4.2, “Presenting a Basic Program Structure,” page 84); the main() function is almost identical. The primary difference is that it defaults to using the current directory if there are no arguments (lines 20–21).
29  /*
30   * process --- do something with the directory, in this case,
31   * print inode/name pairs on standard output.
32   * Returns 0 if all ok, 1 otherwise.
33   */
34
35  int process(char *dir)
36  {
37      DIR *dp;
38      struct dirent *ent;
39
40      if ((dp = opendir(dir)) == NULL) {
41          fprintf(stderr, "%s: %s: cannot open for reading: %s\n",
42                  myname, dir, strerror(errno));
43          return 1;
44      }
45
46      errno = 0;
47
48      while ((ent = readdir(dp)) != NULL)
49          printf("%8ld %s\n", ent->d_ino, ent->d_name);
50
51      if (errno != 0) {
52          fprintf(stderr, "%s: %s: reading directory entries: %s\n",
53                  myname, dir, strerror(errno));
54          return 1;
55      }
56
57      if (closedir(dp) != 0) {
58          fprintf(stderr, "%s: %s: closedir: %s\n",
59                  myname, dir, strerror(errno));
60          return 1;
61      }
62
63      return 0;
64  }
The process() function does all the work, and the majority of it is error-checking code. The heart of the function is lines 48 and 49:

while ((ent = readdir(dp)) != NULL)
    printf("%8ld %s\n", ent->d_ino, ent->d_name);
This loop reads directory entries, one at a time, until readdir() returns NULL. The loop body prints the inode number and filename of each entry. Here's what happens when the program is run:

$ ch05-catdir                               Default to current directory
 639063 .
 639062 ..
 639064 proposal.txt
 639012 lightsabers.url
 688470 code
 638976 progex.texi
 639305 texinfo.tex
 639007 15-processes.texi
 639011 00-preface.texi
 639020 18-tty.texi
 638980 Makefile
 639239 19-i18n.texi
...

The output is not sorted in any way; it represents the linear contents of the directory. (We describe how to sort the directory contents in Section 6.2, "Sorting and Searching Functions," page 181.)

5.3.1.1 Portability Considerations

There are several portability considerations. First, you should not assume that the first two entries returned by readdir() will always be '.' and '..'. Many filesystems use directory organizations that are different from that of the original Unix design, and '.' and '..' could be in the middle of the directory or possibly not even present.4

Second, the POSIX standard is silent about possible values for d_ino. It does say that the returned structures represent directory entries for files; this implies that empty slots are not returned by readdir(), and thus the GNU/Linux readdir() implementation doesn't bother returning entries when 'd_ino == 0'; it continues to the next valid directory entry. So, on GNU/Linux and Unix systems at least, it is unlikely that d_ino will ever be zero. However, it is best to avoid using this field entirely if you can.

Finally, some systems use d_fileno instead of d_ino inside the struct dirent. Be aware of this if you have to port directory-reading code to such systems.
4 GNU/Linux systems are capable of mounting filesystems from many non-Unix operating systems. Many commercial Unix systems can also mount MS-DOS filesystems. Assumptions about Unix filesystems don't apply in such cases.
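The d_fileno/d_ino difference is commonly hidden behind a macro chosen at build time. A sketch of the idea; USE_D_FILENO is a made-up configuration symbol, not a system one, which you would define when porting to a system whose struct dirent has d_fileno but no d_ino:

```c
#include <stddef.h>     /* for NULL */
#include <dirent.h>     /* for DIR, struct dirent, opendir() etc. */

/* Hypothetical portability shim: pick the right member name once,
   and use DIRENT_INODE() everywhere else in the program. */
#ifdef USE_D_FILENO
#define DIRENT_INODE(d) ((long) (d)->d_fileno)
#else
#define DIRENT_INODE(d) ((long) (d)->d_ino)
#endif

/* first_inode --- return the inode number of the first entry in 'path',
   or -1 if the directory cannot be read */
long first_inode(const char *path)
{
    DIR *dp = opendir(path);
    struct dirent *ent;
    long ino = -1;

    if (dp == NULL)
        return -1;
    if ((ent = readdir(dp)) != NULL)
        ino = DIRENT_INODE(ent);
    (void) closedir(dp);
    return ino;
}
```

With this approach only the macro definition changes per system; code like the catdir loop stays the same.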
Indirect System Calls

    "Don't try this at home, kids!"
    —Mr. Wizard—

Many system calls, such as open(), read(), and write(), are meant to be called directly from user-level application code: in other words, from code that you, as a GNU/Linux developer, would write. However, other system calls exist only to make it possible to implement higher-level, standard library functions and should not be called directly. The GNU/Linux getdents() system call is one such; it reads multiple directory entries into a buffer provided by the caller—in this case, the code that implements readdir(). The readdir() code then returns valid directory entries from the buffer, one at a time, refilling the buffer as needed.

These for-library-use-only system calls can be distinguished from for-user-use system calls by their appearance in the manpage. For example, from getdents(2):

NAME
       getdents - get directory entries

SYNOPSIS
       #include <unistd.h>
       #include <linux/types.h>
       #include <linux/dirent.h>
       #include <linux/unistd.h>

       _syscall3(int, getdents, uint, fd, struct dirent *, dirp, uint, count);

       int getdents(unsigned int fd, struct dirent *dirp, unsigned int count);

Any system call that uses a _syscallX() macro should not be called by application code. (More information on these calls can be found in the intro(2) manpage; you should read that manpage if you haven't already.)

In the case of getdents(), many other Unix systems have a similar system call; sometimes with the same name, sometimes with a different name. Thus, trying to use these calls would only lead to a massive portability mess anyway; you're much better off in all cases using readdir(), whose interface is well defined, standard, and portable.
5.3.1.2 Linux and BSD Directory Entries

Although we just said that you should only use the d_ino and d_name members of the struct dirent, it's worth knowing about the d_type member in the BSD and Linux struct dirent. This is an unsigned char value that stores the type of the file named by the directory entry:

struct dirent {
    ...
    ino_t d_ino;              /* As before */
    char d_name[...];         /* As before */
    unsigned char d_type;     /* Linux and modern BSD */
    ...
};

d_type can have any of the values described in Table 5.1.
TABLE 5.1  Values for d_type

Name          Meaning
DT_BLK        Block device file.
DT_CHR        Character device file.
DT_DIR        Directory.
DT_FIFO       FIFO or named pipe.
DT_LNK        Symbolic link.
DT_REG        Regular file.
DT_SOCK       Socket.
DT_UNKNOWN    Unknown file type.
DT_WHT        Whiteout entry (BSD systems only).
Knowing the file’s type just by reading the directory entry is very handy; it can save a possibly expensive stat() system call. (The stat() call is described shortly, in Section 5.4.2, “Retrieving File Information,” page 141.)
5.3.2 BSD Directory Positioning Functions

Occasionally, it's useful to mark the current position in a directory in order to be able to return to it later. For example, you might be writing code that traverses a directory tree and wish to recursively enter each subdirectory as you come across it. (How to distinguish files from directories is discussed in the next section.) For this reason, the original BSD interface included two additional routines:

#include <dirent.h>                                        XSI

/* Caveat Emptor: POSIX XSI uses long, not off_t, for both functions */
off_t telldir(DIR *dir);                        Return current position
void seekdir(DIR *dir, off_t offset);           Move to given position
These routines are similar to the ftell() and fseek() functions in . They return the current position in a directory and set the current position to a previously retrieved value, respectively. These routines are included in the XSI part of the POSIX standard, since they make sense only for directories that are implemented with linear storage of directory entries. Besides the assumptions made about the underlying directory structure, these routines are riskier to use than the simple directory-reading routines. This is because the contents of a directory might be changing dynamically: As files are added to or removed from a directory, the operating system adjusts the contents of the directory. Since directory entries are of variable length, it may be that the absolute offset saved at an earlier time no longer represents the start of a directory entry! Thus, we don’t recommend that you use these functions unless you have to.
5.4 Obtaining Information about Files

Reading a directory to retrieve filenames is only half the battle. Once you have a filename, you need to know how to retrieve the other information associated with a file, such as the file's type, its permissions, owner, and so on.

5.4.1 Linux File Types

Linux (and Unix) supports the following different kinds of file types:

Regular files
    As the name implies; used for data, executable programs, and anything else you might like. In an 'ls -l' listing, they show up with a '-' in the first character of the permissions (mode) field.

Directories
    Special files for associating file names with inodes. In an 'ls -l' listing, they show up with a d in the first character of the permissions field.

Symbolic links
    As described earlier in the chapter. In an 'ls -l' listing, they show up with an l (letter "ell," not digit 1) in the first character of the permissions field.
Devices
    Files representing both physical hardware devices and software pseudo-devices. There are two kinds:

    Block devices
        Devices on which I/O happens in chunks of some fixed physical record size, such as disk drives and tape drives. Access to such devices goes through the kernel's buffer cache. In an 'ls -l' listing, they show up with a b in the first character of the permissions field.

    Character devices
        Also known as raw devices. Originally, character devices were those on which I/O happened a few bytes at a time, such as terminals. However, the character device is also used for direct I/O to block devices such as tapes and disks, bypassing the buffer cache.5 In an 'ls -l' listing, they show up with a c in the first character of the permissions field.

Named pipes
    Also known as FIFOs ("first-in first-out") files. These special files act like pipes; data written into them by one program can be read by another; no data go to or from the disk. FIFOs are created with the mkfifo command; they are discussed in Section 9.3.2, "FIFOs," page 319. In an 'ls -l' listing, they show up with a p in the first character of the permissions field.

Sockets
    Similar in purpose to named pipes,6 they are managed with the socket interprocess communication (IPC) system calls and are not otherwise dealt with in this book. In an 'ls -l' listing, they show up with an s in the first character of the permissions field.

5 Linux uses the block device for disks exclusively. Other systems use both.
6 Named pipes and sockets were developed independently by the System V and BSD Unix groups, respectively. As Unix systems reconverged, both kinds of files became universally available.
5.4.2 Retrieving File Information

Three system calls return information about files:

#include <sys/types.h>                                     POSIX
#include <sys/stat.h>
#include <unistd.h>

int stat(const char *file_name, struct stat *buf);
int fstat(int filedes, struct stat *buf);
int lstat(const char *file_name, struct stat *buf);

The stat() function accepts a pathname and returns information about the given file. It follows symbolic links; that is, when applied to a symbolic link, stat() returns information about the pointed-to file, not about the link itself. For those times when you want to know if a file is a symbolic link, use the lstat() function instead; it does not follow symbolic links.

The fstat() function retrieves information about an already open file. It is particularly useful for file descriptors 0, 1, and 2 (standard input, output, and error), which are already open when a process starts up. However, it can be applied to any open file. (An open file descriptor will never relate to a symbolic link; make sure you understand why.)

The value passed in as the second parameter should be the address of a struct stat, declared in <sys/stat.h>. As with the struct dirent, the struct stat contains at least the following members:

struct stat {
    ...
    dev_t     st_dev;       /* device */
    ino_t     st_ino;       /* inode */
    mode_t    st_mode;      /* type and protection */
    nlink_t   st_nlink;     /* number of hard links */
    uid_t     st_uid;       /* user ID of owner */
    gid_t     st_gid;       /* group ID of owner */
    dev_t     st_rdev;      /* device type (block or character device) */
    off_t     st_size;      /* total size, in bytes */
    blksize_t st_blksize;   /* blocksize for filesystem I/O */
    blkcnt_t  st_blocks;    /* number of blocks allocated */
    time_t    st_atime;     /* time of last access */
    time_t    st_mtime;     /* time of last modification */
    time_t    st_ctime;     /* time of last inode change */
    ...
};
(The layout may be different on different architectures.) This structure uses a number of typedef'd types. Although they are all (typically) integer types, the use of specially defined types allows them to have different sizes on different systems. This keeps user-level code that uses them portable. Here is a fuller description of each field:

st_dev
    The device for a mounted filesystem. Each mounted filesystem has a unique value for st_dev.

st_ino
    The file's inode number within the filesystem. The (st_dev, st_ino) pair uniquely identifies the file.

st_mode
    The file's type and its permissions encoded together in one field. We will shortly see how to extract this information.

st_nlink
    The number of hard links to the file (the link count). This can be zero if the file was unlinked after being opened.

st_uid
    The file's UID (owner number).

st_gid
    The file's GID (group number).

st_rdev
    The device type if the file is a block or character device. st_rdev encodes information about the device. We will shortly see how to extract this information. This field has no meaning if the file is not a block or character device.

st_size
    The logical size of the file. As mentioned in Section 4.5, "Random Access: Moving Around within a File," page 102, a file may have holes in it, in which case the size may not reflect the true amount of storage space that it occupies.

st_blksize
    The "block size" of the file. This represents the preferred size of a data block for I/O to or from the file. This is almost always larger than a physical disk sector. Older Unix systems don't have this field (or st_blocks) in the struct stat. For the Linux ext2 and ext3 filesystems, this value is 4096.
st_blocks
    The number of "blocks" used by the file. On Linux, this is in units of 512-byte blocks. On other systems, the size of a block may be different; check your local stat(2) manpage. (This number comes from the DEV_BSIZE constant in <sys/param.h>. This constant isn't standardized, but it is fairly widely used on Unix systems.) The number of blocks may be more than 'st_size / 512'; besides the data blocks, a filesystem may use additional blocks to store the locations of the data blocks. This is particularly necessary for large files.

st_atime
    The file's access time; that is, the last time the file's data were read.

st_mtime
    The file's modification time; that is, the last time the file's data were written or truncated.

st_ctime
    The file's inode change time. This indicates the last time when the file's metadata changed, such as the permissions or the owner.

NOTE  The st_ctime field is not the file's "creation time"! There is no such thing in a Linux or Unix system. Some early documentation referred to the st_ctime field as the creation time. This was a misguided effort to simplify the presentation of the file metadata.
The time_t type used for the st_atime, st_mtime, and st_ctime fields represents dates and times. These time-related values are sometimes termed timestamps. Discussion of how to use a time_t value is delayed until Section 6.1, “Times and Dates,” page 166. Similarly, the uid_t and gid_t types represent user and group ID numbers, which are discussed in Section 6.3, “User and Group Names,” page 195. Most of the other types are not of general interest.
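Pulling the pieces together, a minimal sketch that retrieves and prints a few of these fields (the function name and output format are ours; ctime(), covered in Section 6.1, is used here only to get a readable timestamp):

```c
#include <stdio.h>      /* for printf() */
#include <time.h>       /* for ctime() */
#include <sys/types.h>
#include <sys/stat.h>   /* for lstat(), struct stat */
#include <unistd.h>

/* show_info --- print selected metadata for one file; 0 on success */
int show_info(const char *name)
{
    struct stat sbuf;

    if (lstat(name, &sbuf) < 0)           /* don't follow symbolic links */
        return -1;
    printf("%s:\n", name);
    printf("    size:  %ld bytes\n", (long) sbuf.st_size);
    printf("    links: %ld\n", (long) sbuf.st_nlink);
    printf("    inode: %ld\n", (long) sbuf.st_ino);
    printf("    mtime: %s", ctime(&sbuf.st_mtime));  /* ctime() supplies '\n' */
    return 0;
}
```

The casts to long keep the printf() format portable, since the typedef'd types may be different sizes on different systems.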
5.4.3 Linux Only: Specifying Higher-Precision File Times

The 2.6 and later Linux kernel supplies three additional fields in the struct stat. These provide nanosecond resolution on the file times:

st_atime_nsec
    The nanoseconds component of the file's access time.

st_mtime_nsec
    The nanoseconds component of the file's modification time.

st_ctime_nsec
    The nanoseconds component of the file's inode change time.
Some other systems also provide such high-resolution time fields, but the member names for the struct stat are not standardized, making it difficult to write portable code that uses these times. (See Section 14.3.2, “Microsecond File Times: utimes(),” page 545, for a related advanced system call.)
5.4.4 Determining File Type

Recall that the st_mode field encodes both the file's type and its permissions. <sys/stat.h> defines a number of macros that determine the file's type. In particular, these macros return true or false when applied to the st_mode field. The macros correspond to each of the file types described earlier. Assume that the following code has been executed:

struct stat stbuf;
char filename[PATH_MAX];     /* PATH_MAX is from <limits.h> */

... fill in filename with a file name ...

if (stat(filename, & stbuf) < 0) {
    /* handle error */
}
Once stbuf has been filled in by the system, the following macros can be called, being passed stbuf.st_mode as the argument:

S_ISREG(stbuf.st_mode)
    Returns true if filename is a regular file.

S_ISDIR(stbuf.st_mode)
    Returns true if filename is a directory.

S_ISCHR(stbuf.st_mode)
    Returns true if filename is a character device. Devices are shortly discussed in more detail.

S_ISBLK(stbuf.st_mode)
    Returns true if filename is a block device.

S_ISFIFO(stbuf.st_mode)
    Returns true if filename is a FIFO.

S_ISLNK(stbuf.st_mode)
    Returns true if filename is a symbolic link. (This can never return true if stat() or fstat() were used instead of lstat().)

S_ISSOCK(stbuf.st_mode)
    Returns true if filename is a socket.

NOTE  It happens that on GNU/Linux, these macros return 1 for true and 0 for false. However, on other systems, it's possible that they return an arbitrary nonzero value for true, instead of 1. (POSIX specifies only nonzero vs. zero.) Thus, you should always use these macros as standalone tests instead of testing the return value:

if (S_ISREG(stbuf.st_mode)) ...                 Correct

if (S_ISREG(stbuf.st_mode) == 1) ...            Incorrect
Along with the macros, <sys/stat.h> provides two sets of bitmasks. One set is for testing permission, and the other set is for testing the type of a file. We saw the permission masks in Section 4.6, "Creating Files," page 106, when we discussed the mode_t type and values for open() and creat(). The bitmasks, their values for GNU/Linux, and their meanings are described in Table 5.2. Several of these masks serve to isolate the different sets of bits encoded in the st_mode field:

• S_IFMT represents bits 12–15, which are where the different types of files are encoded.

• S_IRWXU represents bits 6–8, which are the user's permission (read, write, execute for User).

• S_IRWXG represents bits 3–5, which are the group's permission (read, write, execute for Group).

• S_IRWXO represents bits 0–2, which are the "other" permission (read, write, execute for Other).

The permission and file type bits are depicted graphically in Figure 5.3.
TABLE 5.2  POSIX file-type and permission bitmasks in <sys/stat.h>

Mask       Value      Meaning
S_IFMT     0170000    Bitmask for the file type bitfields.
S_IFSOCK   0140000    Socket.
S_IFLNK    0120000    Symbolic link.
S_IFREG    0100000    Regular file.
S_IFBLK    0060000    Block device.
S_IFDIR    0040000    Directory.
S_IFCHR    0020000    Character device.
S_IFIFO    0010000    FIFO.
S_ISUID    0004000    Setuid bit.
S_ISGID    0002000    Setgid bit.
S_ISVTX    0001000    Sticky bit.
S_IRWXU    0000700    Mask for owner permissions.
S_IRUSR    0000400    Owner read permission.
S_IWUSR    0000200    Owner write permission.
S_IXUSR    0000100    Owner execute permission.
S_IRWXG    0000070    Mask for group permissions.
S_IRGRP    0000040    Group read permission.
S_IWGRP    0000020    Group write permission.
S_IXGRP    0000010    Group execute permission.
S_IRWXO    0000007    Mask for permissions for others.
S_IROTH    0000004    Other read permission.
S_IWOTH    0000002    Other write permission.
S_IXOTH    0000001    Other execute permission.
The file-type masks are standardized primarily for compatibility with older code; they should not be used directly, because such code is less readable than the corresponding macros. It happens that the macros are implemented, logically enough, with the masks, but that's irrelevant for user-level code.

FIGURE 5.3 Permission and file-type bits

    Bits 15–12:  File type
    Bits 11–9:   SUID, SGID, SVTX
    Bits 8–6:    Owner r/w/x
    Bits 5–3:    Group r/w/x
    Bits 2–0:    Other r/w/x
The POSIX standard explicitly states that no new bitmasks will be standardized in the future and that tests for any additional kinds of file types that may be added will be available only as S_ISxxx() macros.

5.4.4.1 Device Information
Because it is meant to apply to non-Unix systems as well as Unix systems, the POSIX standard doesn't define the meaning for the dev_t type. However, it's worthwhile to know what's in a dev_t.

When S_ISBLK(sbuf.st_mode) or S_ISCHR(sbuf.st_mode) is true, then the device information is found in the sbuf.st_rdev field. Otherwise, this field does not contain any useful information.

Traditionally, Unix device files encode a major device number and a minor device number within the dev_t value. The major number distinguishes the device type, such as "disk drive" or "tape drive." Major numbers also distinguish among different types of devices, such as SCSI disk vs. IDE disk. The minor number distinguishes the unit of that type, for example, the first disk or the second one. You can see these values with 'ls -l':
$ ls -l /dev/hda /dev/hda?                           Show numbers for first hard disk
brw-rw----    1 root     disk       3,   0 Aug 31  2002 /dev/hda
brw-rw----    1 root     disk       3,   1 Aug 31  2002 /dev/hda1
brw-rw----    1 root     disk       3,   2 Aug 31  2002 /dev/hda2
brw-rw----    1 root     disk       3,   3 Aug 31  2002 /dev/hda3
brw-rw----    1 root     disk       3,   4 Aug 31  2002 /dev/hda4
brw-rw----    1 root     disk       3,   5 Aug 31  2002 /dev/hda5
brw-rw----    1 root     disk       3,   6 Aug 31  2002 /dev/hda6
brw-rw----    1 root     disk       3,   7 Aug 31  2002 /dev/hda7
brw-rw----    1 root     disk       3,   8 Aug 31  2002 /dev/hda8
brw-rw----    1 root     disk       3,   9 Aug 31  2002 /dev/hda9
$ ls -l /dev/null                                    Show info for /dev/null, too
crw-rw-rw-    1 root     root       1,   3 Aug 31  2002 /dev/null
Instead of the file size, ls displays the major and minor numbers. In the case of the hard disk, /dev/hda represents the whole drive. /dev/hda1, /dev/hda2, and so on, represent partitions within the drive. They all share the same major device number (3), but have different minor device numbers. Note that the disk devices are block devices, whereas /dev/null is a character device. Block devices and character devices are separate entities; even if a character device and a block device share the same major device number, they are not necessarily related.

The major and minor device numbers can be extracted from a dev_t value with the major() and minor() functions defined in <sys/sysmacros.h>:

Common
#include <sys/types.h>
#include <sys/sysmacros.h>

int major(dev_t dev);                     Major device number
int minor(dev_t dev);                     Minor device number
dev_t makedev(int major, int minor);      Create a dev_t value
(Some systems implement them as macros.) The makedev() function goes the other way; it takes separate major and minor values and encodes them into a dev_t value. Its use is otherwise beyond the scope of this book; the morbidly curious should see mknod(2).

The following program, ch05-devnum.c, shows how to use the stat() system call, the file-type test macros, and finally, the major() and minor() macros.

/* ch05-devnum.c --- Demonstrate stat(), major(), minor(). */

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
int main(int argc, char **argv)
{
    struct stat sbuf;
    char *devtype;

    if (argc != 2) {
        fprintf(stderr, "usage: %s path\n", argv[0]);
        exit(1);
    }

    if (stat(argv[1], & sbuf) < 0) {
        fprintf(stderr, "%s: stat: %s\n", argv[1], strerror(errno));
        exit(1);
    }

    if (S_ISCHR(sbuf.st_mode))
        devtype = "char";
    else if (S_ISBLK(sbuf.st_mode))
        devtype = "block";
    else {
        fprintf(stderr, "%s is not a block or character device\n", argv[1]);
        exit(1);
    }

    printf("%s: major: %d, minor: %d\n", devtype,
        major(sbuf.st_rdev), minor(sbuf.st_rdev));
    exit(0);
}
Here is what happens when the program is run:

$ ch05-devnum /tmp                          Try a nondevice
/tmp is not a block or character device
$ ch05-devnum /dev/null                     Character device
char: major: 1, minor: 3
$ ch05-devnum /dev/hda2                     Block device
block: major: 3, minor: 2
Fortunately, the output agrees with that of ls, giving us confidence7 that we have indeed written correct code. Reproducing the output of ls is all fine and good, but is it really useful? The answer is yes. Any application that works with file hierarchies must be able to distinguish among all the different types of files. Consider an archiver such as tar or cpio. It would be disastrous if such a program treated a disk device file as a regular file, attempting to read it and store its contents in an archive! Or consider find, which can perform
7 The technical term is a warm fuzzy.
arbitrary actions based on the type and other attributes of files it encounters. (find is a complicated program; see find(1) if you're not familiar with it.) Or even something as simple as a disk space accounting package has to distinguish regular files from everything else.

5.4.4.2 The V7 cat Revisited
In Section 4.4.4, "Example: Unix cat," page 99, we promised to return to the V7 cat program to review its use of the stat() system call. The first group of lines that used it were these:

31          fstat(fileno(stdout), &statb);
32          statb.st_mode &= S_IFMT;
33          if (statb.st_mode!=S_IFCHR && statb.st_mode!=S_IFBLK) {
34                  dev = statb.st_dev;
35                  ino = statb.st_ino;
36          }
This code should now make sense. Line 31 calls fstat() on the standard output to fill in the statb structure. Line 32 throws away all the information in statb.st_mode except the file type, by ANDing the mode with the S_IFMT mask. Line 33 checks that the file being used for standard output is not a device file. In that case, the program saves the device and inode numbers in dev and ino. These values are then checked for each input file in lines 50–56:

50          fstat(fileno(fi), &statb);
51          if (statb.st_dev==dev && statb.st_ino==ino) {
52                  fprintf(stderr, "cat: input %s is output\n",
53                     fflg?"-": *argv);
54                  fclose(fi);
55                  continue;
56          }
If an input file’s st_dev and st_ino values match those of the output file, then cat complains and continues to the next file named on the command line. The check is done unconditionally, even though dev and ino are set only if the output is not a device file. This works out OK, because of how those variables are declared: 17
int dev, ino = -1;
Since ino is initialized to -1, no valid inode number will ever be equal to it.8 That dev is not so initialized is sloppy, but not a problem, since the test on line 51 requires that both the device and inode be equal. (A good compiler will complain that dev is used without being initialized: 'gcc -Wall' does.) Note also that neither call to fstat() is checked for errors. This too is sloppy, although less so; it is unlikely that fstat() will fail on a valid file descriptor.

The test for input file equals output file is done only for nondevice files. This makes it possible to use cat to copy input from device files to themselves, such as with terminals:

$ tty                                   Print current terminal device name
/dev/pts/3
$ cat /dev/pts/3 > /dev/pts/3           Copy keyboard input to screen
this is a line of text                  Type in a line
this is a line of text                  cat repeats it
5.4.5 Working with Symbolic Links
In general, symbolic links act like hard links; file operations such as open() and stat() apply to the pointed-to file instead of to the symbolic link itself. However, there are times when it really is necessary to work with the symbolic link instead of with the file the link points to.

For this reason, the lstat() system call exists. It behaves exactly like stat(), but if the file being checked happens to be a symbolic link, then the information returned applies to the symbolic link, and not to the pointed-to file. Specifically:

• S_ISLNK(sbuf.st_mode) will be true.
• sbuf.st_size is the number of bytes used by the name of the pointed-to file.

We already saw that the symlink() system call creates a symbolic link. But given an existing symbolic link, how can we retrieve the name of the file it points to? (ls obviously can, so we ought to be able to also.) Opening the link with open() in order to read it with read() won't work; open() follows the link to the pointed-to file. Symbolic links thus necessitate an additional system call, named readlink():
8 This statement was true for V7; there are no such guarantees on modern systems.
POSIX
#include <unistd.h>

int readlink(const char *path, char *buf, size_t bufsiz);
readlink() places the contents of the symbolic link named by path into the buffer pointed to by buf. No more than bufsiz characters are copied. The return value is the number of characters placed in buf or -1 if an error occurred. readlink() does not supply the trailing zero byte.

Note that if the buffer passed in to readlink() is too small, you will lose information; the full name of the pointed-to file won't be available. To properly use readlink(), your code should do the following:

1. Use lstat() to verify that you have a symbolic link.
2. Make sure that your buffer to hold the link contents is at least 'sbuf.st_size + 1' bytes big; the '+ 1' is for the trailing zero byte to turn the buffer into a usable C string.
3. Call readlink(). It doesn't hurt to verify that the returned value is the same as sbuf.st_size.
4. Assign '\0' to the byte after the contents of the link, to make it into a C string.

Code to do all that would look something like this:

/* Error checking omitted for brevity */
int count;
char linkfile[PATH_MAX], realfile[PATH_MAX];     /* PATH_MAX is in <limits.h> */
struct stat sbuf;

... fill in linkfile with path to symbolic link of interest ...

lstat(linkfile, & sbuf);                         Get stat information
if (! S_ISLNK(sbuf.st_mode))                     Check that it's a symlink
    /* not a symbolic link, handle it */
if (sbuf.st_size + 1 > PATH_MAX)                 Check buffer size
    /* handle buffer size problems */

count = readlink(linkfile, realfile, PATH_MAX);  Read the link
if (count != sbuf.st_size)
    /* something weird going on, handle it */

realfile[count] = '\0';                          Make it into a C string
This example uses fixed-size buffers for simplicity of presentation. Real code would use malloc() to allocate a buffer of the correct size since the fixed-size arrays might be too small. The file lib/xreadlink.c in the GNU Coreutils does just this. It reads
the contents of a symbolic link into storage allocated by malloc(). We show here just the function; most of the file is boilerplate definitions. Line numbers are relative to the start of the file:

55  /* Call readlink to get the symbolic link value of FILENAME.
56     Return a pointer to that NUL-terminated string in malloc'd storage.
57     If readlink fails, return NULL (caller may use errno to diagnose).
58     If realloc fails, or if the link value is longer than SIZE_MAX :-),
59     give a diagnostic and exit.  */
60
61  char *
62  xreadlink (char const *filename)
63  {
64    /* The initial buffer size for the link value.  A power of 2
65       detects arithmetic overflow earlier, but is not required.  */
66    size_t buf_size = 128;
67
68    while (1)
69      {
70        char *buffer = xmalloc (buf_size);
71        ssize_t link_length = readlink (filename, buffer, buf_size);
72
73        if (link_length < 0)
74          {
75            int saved_errno = errno;
76            free (buffer);
77            errno = saved_errno;
78            return NULL;
79          }
80
81        if ((size_t) link_length < buf_size)
82          {
83            buffer[link_length] = 0;
84            return buffer;
85          }
86
87        free (buffer);
88        buf_size *= 2;
89        if (SSIZE_MAX < buf_size || (SIZE_MAX / 2 < SSIZE_MAX && buf_size == 0))
90          xalloc_die ();
91      }
92  }
The function body consists of an infinite loop (lines 68–91), broken at line 84 which returns the allocated buffer. The loop starts by allocating an initial buffer (line 70) and reading the link (line 71). Lines 73–79 handle the error case, saving and restoring errno so that it can be used correctly by the calling code. Lines 81–85 handle the “success” case, in which the link’s contents’ length is smaller than the buffer size. In this case, the terminating zero is supplied (line 83) and then the
buffer returned (line 84), breaking the infinite loop. This ensures that the entire link contents have been placed into the buffer, since readlink() has no way to indicate "insufficient space in buffer."

Lines 87–88 free the buffer and double the buffer size for the next try at the top of the loop. Lines 89–90 handle the case in which the link's size is too big: buf_size is greater than SSIZE_MAX, or SSIZE_MAX is larger than the value that can be represented in a signed integer of the same size as used to hold SIZE_MAX and buf_size has wrapped around to zero. (These are unlikely conditions, but strange things do happen.) If either condition is true, the program dies with an error message. Otherwise, the function continues around to the top of the loop to make another try at allocating a buffer and reading the link.

Some further explanation: The 'SIZE_MAX / 2 < SSIZE_MAX' condition is true only on systems on which 'SIZE_MAX < 2 * SSIZE_MAX'; we don't know of any, but only on such a system can buf_size wrap around to zero. Since in practice this condition can't be true, the compiler can optimize away the whole expression, including the following 'buf_size == 0' test.

After reading this code, you might ask, "Why not use lstat() to retrieve the size of the symbolic link, allocate a buffer of the right size with malloc(), and be done?" Well, there are a number of reasons.9

• lstat() is a system call; it's best to avoid the overhead of making it since the contents of most symbolic links will fit in the initial buffer size of 128.
• Calling lstat() introduces a race condition: The link could change between the execution of lstat() and readlink(), forcing the need to iterate anyway.
• Some systems don't properly fill in the st_size member for symbolic links. (Sad, but true.)

In a similar fashion, as we see in Section 8.4.2, "Getting the Current Directory: getcwd()," page 258, Linux provides special symbolic links under /proc whose st_size is zero, but for which readlink() does return valid content.

Finally, when the buffer isn't big enough, xreadlink() uses free() and malloc() with a bigger size, instead of realloc(), to avoid the useless copying that realloc()
9 Thanks to Jim Meyering for explaining the issues.
does. (The comment on line 58 is thus out of date since realloc() isn’t being used; this is fixed in the post-5.0 version of the Coreutils.)
5.5 Changing Ownership, Permission, and Modification Times

Several additional system calls let you change other file-related information: in particular, the owner and group of a file, the file's permissions, and the file's access and modification times.
5.5.1 Changing File Ownership: chown(), fchown(), and lchown()
File ownership and group are changed with three similar system calls:

POSIX
#include <sys/types.h>
#include <unistd.h>

int chown(const char *path, uid_t owner, gid_t group);
int fchown(int fd, uid_t owner, gid_t group);
int lchown(const char *path, uid_t owner, gid_t group);
chown() works on a pathname argument, fchown() works on an open file, and lchown() works on symbolic links instead of on the files pointed to by symbolic links. In all other respects, the three calls work identically, returning 0 on success and -1 on error.

It is noteworthy that one system call changes both the owner and group of a file. To change only the owner or only the group, pass in a value of -1 for the ID number that is to be left unchanged. While you might think that you could pass in the corresponding value from a previously retrieved struct stat for the file or file descriptor, that method is more error prone. There's a race condition: The owner or group could have changed between the call to stat() and the call to chown().

You might wonder, "Why be able to change ownership of a symbolic link? The permissions and ownership on them don't matter." But what happens if a user leaves, but all his files are still needed? It's necessary to be able to change the ownership on all the person's files to someone else, including symbolic links.

GNU/Linux systems normally do not permit ordinary (non-root) users to change the ownership of ("give away") their files. Changing the group to one of the user's groups is allowed, of course. The restriction on changing owners follows BSD systems,
which also have this prohibition. The primary reason is that allowing users to give away files can defeat disk accounting. Consider a scenario like this:

$ mkdir mywork                       Make a directory
$ chmod go-rwx mywork                Set permissions to drwx------
$ cd mywork                          Go there
$ myprogram > large_data_file        Create a large file
$ chmod ugo+rw large_data_file       Set permissions to -rw-rw-rw-
$ chown otherguy large_data_file     Give file away to otherguy
In this example, large_data_file now belongs to user otherguy. The original user can continue to read and write the file, because of the permissions. But otherguy will be charged for the disk space it occupies. However, since it’s in a directory that belongs to the original user, which cannot be accessed by otherguy, there is no way for otherguy to remove the file. Some System V systems do allow users to give away files. (Setuid and setgid files have the corresponding bit removed when the owner is changed.) This can be a particular problem when files are extracted from a .tar or .cpio archive; the extracted files end up belonging to the UID or GID encoded in the archive. On such systems, the tar and cpio programs have options that prevent this, but it’s important to know that chown()’s behavior does vary across systems. We will see in Section 6.3, “User and Group Names,” page 195, how to relate user and group names to their corresponding numeric values.
5.5.2 Changing Permissions: chmod() and fchmod()
After all the discussion in Chapter 4, "Files and File I/O," page 83, and in this chapter, changing permissions is almost anticlimactic. It's done with one of two system calls, chmod() and fchmod():

POSIX
#include <sys/types.h>
#include <sys/stat.h>

int chmod(const char *path, mode_t mode);
int fchmod(int fildes, mode_t mode);

chmod() works on a pathname argument, and fchmod() works on an open file. (There is no lchmod() call in POSIX, since the system ignores the permission settings on symbolic links. Some systems do have such a call, though.) As with most other system calls, these return 0 on success and -1 on failure. Only the file's owner or root can change a file's permissions.
The mode value is created in the same way as for open() and creat(), as discussed in Section 4.6, “Creating Files,” page 106. See also Table 5.2, which lists the permission constants. The system will not allow setting the setgid bit (S_ISGID) if the group of the file does not match the effective group ID of the process or one of its supplementary groups. (We have not yet discussed these issues in detail; see Section 11.1.1, “Real and Effective IDs,” page 405.) Of course, this check does not apply to root or to code running as root.
5.5.3 Changing Timestamps: utime()
The struct stat structure contains three fields of type time_t:

st_atime    The time the file was last accessed (read).
st_mtime    The time the file was last modified (written).
st_ctime    The time the file's inode was last changed (for example, renamed).

A time_t value represents time in "seconds since the Epoch." The Epoch is the Beginning of Time for computer systems. GNU/Linux and Unix use Midnight, January 1, 1970 UTC10 as the Epoch. Microsoft Windows systems use Midnight, January 1, 1980 (local time, apparently) as the Epoch.

time_t values are sometimes referred to as timestamps. In Section 6.1, "Times and Dates," page 166, we look at how these values are obtained and at how they're used. For now, it's enough to know what a time_t value is and that it represents seconds since the Epoch.
The utime() system call allows you to change a file's access and modification timestamps:

POSIX
#include <sys/types.h>
#include <utime.h>

int utime(const char *filename, struct utimbuf *buf);
A struct utimbuf looks like this:

struct utimbuf {
    time_t actime;     /* access time */
    time_t modtime;    /* modification time */
};

10 UTC is a language-independent acronym for Coordinated Universal Time. Older code (and sometimes older people) refer to this as "Greenwich Mean Time" (GMT), which is the time in Greenwich, England. When time zones came into widespread use, Greenwich was chosen as the location to which all other time zones are relative, either behind it or ahead of it.
If the call is successful, it returns 0; otherwise, it returns -1. If buf is NULL, then the system sets both the access time and the modification time to the current time. To change one time but not the other, use the original value from the struct stat. For example:

/* Error checking omitted for brevity */
struct stat sbuf;
struct utimbuf ut;
time_t now;

time(& now);                          Get current time of day, see next chapter
stat("/some/file", & sbuf);           Fill in sbuf

ut.actime = sbuf.st_atime;            Access time unchanged
ut.modtime = now - (24 * 60 * 60);    Set modtime to 24 hours ago

utime("/some/file", & ut);            Set the values
About now, you may be asking yourself, “Why would anyone want to change a file’s access and modification times?” Good question. To answer it, consider the case of a program that creates backup archives, such as tar or cpio. These programs have to read the contents of a file in order to archive them. Reading the file, of course, changes the file’s access time. However, that file might not have been read by a human in 10 years. Someone doing an ‘ls -lu’, which displays the access time (instead of the default modification time), should see that the last time the file was read was 10 years ago. Thus, the backup program should save the original access and modification times, read the file in order to archive it, and then restore the original times with utime(). Similarly, consider the case of an archiving program restoring a file from an archive. The archive stores the file’s original access and modification times. However, when a file is extracted from an archive to a newly created copy on disk, the new file has the current date and time of day for its access and modification times. However, it’s more useful if the newly created file looks as if it’s the same age as the original file in the archive. Thus, the archiver needs to be able to set the access and modification times to those stored in the archive.
NOTE In new code, you may wish to use the utimes() call (note the s in the name), which is described later in the book, in Section 14.3.2, “Microsecond File Times: utimes(),” page 545.
5.5.3.1 Faking utime(file, NULL)
Some older systems don't set the access and modification times to the current time when the second argument to utime() is NULL. Yet, higher-level code (such as GNU touch) is simpler and more straightforward if it can rely on a single standardized interface. The GNU Coreutils library thus contains a replacement function for utime() that handles this case, which can then be called by higher-level code. This reflects the "pick the best interface for the job" design principle we described in Section 1.5, "Portability Revisited," page 19.

The replacement function is in the file lib/utime.c in the Coreutils distribution. The following code is the version from Coreutils 5.0. Line numbers are relative to the start of the file:

24  #include <sys/types.h>
25
26  #ifdef HAVE_UTIME_H
27  # include <utime.h>
28  #endif
29
30  #include "full-write.h"
31  #include "safe-read.h"
32
33  /* Some systems (even some that do have <utime.h>)
34     don't declare this structure anywhere. */
35  #ifndef HAVE_STRUCT_UTIMBUF
36  struct utimbuf
37  {
38    long actime;
39    long modtime;
40  };
41  #endif
42
43  /* Emulate utime (file, NULL) for systems (like 4.3BSD) that do not
44     interpret it to set the access and modification times of FILE to
45     the current time.  Return 0 if successful, -1 if not. */
46
47  static int
48  utime_null (const char *file)
49  {
50  #if HAVE_UTIMES_NULL
51    return utimes (file, 0);
52  #else
53    int fd;
54    char c;
55    int status = 0;
56    struct stat sb;
57
58    fd = open (file, O_RDWR);
59    if (fd < 0
60        || fstat (fd, &sb) < 0
61        || safe_read (fd, &c, sizeof c) == SAFE_READ_ERROR
62        || lseek (fd, (off_t) 0, SEEK_SET) < 0
63        || full_write (fd, &c, sizeof c) != sizeof c
64        /* Maybe do this -- it's necessary on SunOS4.1.3 with some combination
65           of patches, but that system doesn't use this code: it has utimes.
66           || fsync (fd) < 0
67        */
68        || (st.st_size == 0 && ftruncate (fd, st.st_size) < 0)
69        || close (fd) < 0)
70      status = -1;
71    return status;
72  #endif
73  }
74
75  int
76  rpl_utime (const char *file, const struct utimbuf *times)
77  {
78    if (times)
79      return utime (file, times);
80
81    return utime_null (file);
82  }
Lines 33–41 define the struct utimbuf; as the comment says, some systems don't declare the structure. The utime_null() function does the work. If the utimes() system call is available, it is used. (utimes() is a similar, but more advanced, system call, which is covered in Section 14.3.2, "Microsecond File Times: utimes()," page 545. It also allows NULL for the second argument, meaning use the current time.)

In the case that the times must be updated manually, the code does the update by first reading a byte from the file, and then writing it back. (The original Unix touch worked this way.) The operations are as follows:

1. Open the file, line 58.
2. Call fstat() on the file, line 60.
3. Read one byte, line 61. For our purposes, safe_read() acts like read(); it's explained in Section 10.4.4, "Restartable System Calls," page 357.
4. Seek back to the front of the file with lseek(), line 62. This is done to write the just-read byte back on top of itself.
5. Write the byte back, line 63. full_write() acts like write(); it is also covered in Section 10.4.4, "Restartable System Calls," page 357.
6. If the file is of zero size, use ftruncate() to set it to zero size (line 68). This doesn't change the file, but it has the side effect of updating the access and modification times. (ftruncate() was described in Section 4.8, "Setting File Length," page 114.)
7. Close the file, line 69.

These steps are all done in one long successive chain of tests, inside an if. The tests are set up so that if any operation fails, utime_null() returns -1, like a regular system call. errno is automatically set by the system, for use by higher-level code.

The rpl_utime() function (lines 75–82) is the "replacement utime()." If the second argument is not NULL, then it calls the real utime(). Otherwise, it calls utime_null().
5.5.4 Using fchown() and fchmod() for Security

The original Unix systems had only chown() and chmod() system calls. However, on heavily loaded systems, these system calls are subject to race conditions, by which an attacker could arrange to replace with a different file the file whose ownership or permissions were being changed.

However, once a file is opened, race conditions aren't an issue anymore. A program can use stat() on a pathname to obtain information about the file. If the information is what's expected, then after the file is opened, fstat() can verify that the file is the same (by comparing the st_dev and st_ino fields of the "before" and "after" struct stat structures). Once the program knows that the files are the same, the ownership or permissions can then be changed with fchown() or fchmod().
These system calls, as well as lchown(), are of relatively recent vintage;11 older Unix systems won’t have them, although modern, POSIX-compliant systems do. There are no corresponding futime() or lutime() functions. In the case of futime(), this is (apparently) because the file timestamps are not critical to system security in the same way that ownership and permissions are. There is no lutime(), since the timestamps are irrelevant for symbolic links.
5.6 Summary

• The file and directory hierarchy as seen by the user is one logical tree, rooted at /. It is made up of one or more storage partitions, each of which contains a filesystem. Within a filesystem, inodes store information about files (metadata), including the location of file data blocks.

• Directories make the association between filenames and inodes. Conceptually, directory contents are just sequences of (inode, name) pairs. Each directory entry for a file is called a (hard) link, and files can have many links. Hard links, because they work only by inode number, must all be on the same filesystem. Symbolic (soft) links are pointers to files or directories that work based on filename, not inode number, and thus are not restricted to being on the same filesystem.

• Hard links are created with link(), symbolic links are created with symlink(), links are removed with unlink(), and files are renamed (possibly being moved to another directory) with rename(). A file's data blocks are not reclaimed until the link count goes to zero and the last open file descriptor for the file is closed.

• Directories are created with mkdir() and removed with rmdir(); a directory must be empty (nothing left but '.' and '..') before it can be removed. The GNU/Linux version of the ISO C remove() function calls unlink() or rmdir() as appropriate.

• Directories are processed with the opendir(), readdir(), rewinddir(), and closedir() functions. A struct dirent contains the inode number and the file's name. Maximally portable code uses only the filename in the d_name member. The BSD telldir() and seekdir() functions for saving and restoring the current position in a directory are widely available but are not as fully portable as the other directory processing functions.

• File metadata are retrieved with the stat() family of system calls; the struct stat structure contains all the information about a file except the filename. (Indeed, since a file may have many names or may even be completely unlinked, it's not possible to make the name available.)

• The S_ISxxx() macros in <sys/stat.h> make it possible to determine a file's type. The major() and minor() functions from <sys/sysmacros.h> make it possible to decode the dev_t values that represent block and character devices.

• Symbolic links can be checked for using lstat(), and the st_size field of the struct stat for a symbolic link returns the number of bytes needed to hold the name of the pointed-to file. The contents of a symbolic link are read with readlink(). Care must be taken to get the buffer size correct and to terminate the retrieved filename with a trailing zero byte so that it can be used as a C string.

• Several miscellaneous system calls update other information: the chown() family for the owner and group, the chmod() routines for the file permissions, and utime() to change file access and modification times.

11 fchown() and fchmod() were introduced in 4.2 BSD but not picked up for System V until System V Release 4.

5.7 Exercises
1. Write a routine 'const char *fmt_mode(mode_t mode)'. The input is a mode_t value as provided by the st_mode field in the struct stat; that is, it contains both the permission bits and the file type. The output should be a 10-character string identical to the first field of output from 'ls -l'. In other words, the first character identifies the file type, and the other nine the permissions. When the S_ISUID and S_IXUSR bits are set, use an s instead of an x; if only the S_ISUID bit is set, use an S. Similarly for the S_ISGID and S_IXGRP bits. If both the S_ISVTX and S_IXOTH bits are set, use t; for S_ISVTX alone, use T. For simplicity, you may use a static buffer whose contents are overwritten each time the routine is called.
2. Extend ch05-catdir.c to call stat() on each file name found. Then print the inode number, the result of fmt_mode(), the link count, and the file's name.

3. Extend ch05-catdir.c further such that if a file is a symbolic link, it will also print the name of the pointed-to file.

4. Add an option such that if a filename is that of a subdirectory, the program recursively enters the subdirectory and prints information about the subdirectory's files (and directories). Only one level of recursion is needed.

5. If you're not using a GNU/Linux system, run ch05-trymkdir (see Section 5.2, "Creating and Removing Directories," page 130) on your system and compare the results to those we showed.

6. Write the mkdir program. See your local mkdir(1) manpage and implement all its options.

7. In the root directory, /, both the device and inode numbers for '.' and '..' are the same. Using this bit of information, write the pwd program. The program has to start by finding the name of the current directory by reading the contents of the parent directory. It must then continue, working its way up the filesystem hierarchy, until it reaches the root directory. Printing the directory name backwards, from the current directory up to the root, is easy. How will your version of pwd manage to print the directory name in the correct way, from the root on down?

8. If you wrote pwd using recursion, write it again, using iteration. If you used iteration, write it using recursion. Which is better? (Hint: consider very deeply nested directory trees.)

9. Examine the rpl_utime() function (see Section 5.5.3.1, "Faking utime(file, NULL)," page 159) closely. What resource is not recovered if one of the tests in the middle of the if fails? (Thanks to Geoff Collyer.)

10. (Hard.) Read the chmod(1) manpage. Write code to parse the symbolic options argument, which allows adding, removing, and setting permissions based on user, group, other, and "all." Once you believe it works, write your own version of chmod that applies the permission specification to each file or directory named on the command line. Which function did you use, chmod()—or open() and fchmod()—and why?
6 General Library Interfaces — Part 1
In this chapter

6.1 Times and Dates .................................. page 166
6.2 Sorting and Searching Functions .................. page 181
6.3 User and Group Names ............................. page 195
6.4 Terminals: isatty() .............................. page 202
6.5 Suggested Reading ................................ page 203
6.6 Summary .......................................... page 203
Exercises ............................................ page 205
We saw in Chapter 5, "Directories and File Metadata," page 117, that directly reading a directory returns filenames in the order in which they're kept in the directory. We also saw that the struct stat contains all the information about a file, except its name. However, some components of that structure are not directly usable; they're just numeric values.
This chapter presents the rest of the APIs needed to make full use of the struct stat component values. In order, we cover the following topics: time_t values for representing times and the time formatting function; sorting and searching functions (for sorting filenames, or any other data); the uid_t and gid_t types for representing users and groups and the functions that map them to and from the corresponding user and group names; and finally, a function to test whether a file descriptor represents a terminal.
6.1 Times and Dates

Time values are kept in the type known as time_t. The ISO C standard guarantees that this is a numeric type but does not otherwise specify what it is (integer or floating-point), or the range or the precision of the values stored therein. On GNU/Linux and Unix systems, time_t values represent "seconds since the Epoch." The Epoch is the beginning of recorded time, which is Midnight, January 1, 1970, UTC. On most systems, a time_t is a C long int. For 32-bit systems, this means that the time_t "overflows" sometime on January 19, 2038. By then, we hope, the time_t type will be redefined to be at least 64 bits big. Various functions exist to retrieve the current time, compute the difference between two time_t values, convert time_t values into a more usable representation, and format both representations as character strings. Additionally, a date and time representation can be converted back into a time_t, and limited time-zone information is available. A separate set of functions provides access to the current time with a higher resolution than one second. The functions work by providing two discrete values: the time as seconds since the Epoch, and the number of microseconds within the current second. These functions are described later in the book, in Section 14.3.1, "Microsecond Times: gettimeofday()," page 544.
6.1.1 Retrieving the Current Time: time() and difftime()
The time() system call retrieves the current date and time; difftime() computes the difference between two time_t values:

#include <time.h>                                              ISO C

time_t time(time_t *t);
double difftime(time_t time1, time_t time0);

time() returns the current time. If the t parameter is not NULL, then the value pointed to by t is also filled in with the current time. It returns (time_t) -1 if there was an error, and errno is set.
Although ISO C doesn't specify what's in a time_t value, POSIX does indicate that it represents time in seconds. Thus, it's both common and portable to make this assumption. For example, to see if a time value represents something that is six months or more in the past, one might use code like this:

/* Error checking omitted for brevity */
time_t now, then, some_time;

time(& now);                               /* Get current time */
then = now - (6L * 31 * 24 * 60 * 60);     /* Approximately six months ago */

... set some_time, for example, via stat() ...

if (some_time < then)
    /* more than 6 months in the past */
else
    /* less than 6 months in the past */
However, since strictly portable code may need to run on non-POSIX systems, the difftime() function exists to produce the difference between two times. The same test, using difftime(), would be written this way:
time_t now, some_time;
const double six_months = 6.0 * 31 * 24 * 60 * 60;

time(& now);                               /* Get current time */

... set some_time, for example, via stat() ...

if (difftime(now, some_time) >= six_months)
    /* more than 6 months in the past */
else
    /* less than 6 months in the past */
The return type of difftime() is a double because a time_t could possibly represent fractions of a second as well. On POSIX systems, it always represents whole seconds.
In both of the preceding examples, note the use of typed constants to force the computation to be done with the right type of math: 6L in the first instance for long integers, 6.0 in the second, for floating point.
6.1.2 Breaking Down Times: gmtime() and localtime()
In practice, the "seconds since the Epoch" form of a date and time isn't very useful except for simple comparisons. Computing the components of a time yourself, such as the month, day, year, and so on, is error prone, since the local time zone (possibly with daylight-saving time) must be taken into account, leap years must be computed correctly, and so forth. Fortunately, two standard routines do this job for you:

#include <time.h>                                              ISO C

struct tm *gmtime(const time_t *timep);
struct tm *localtime(const time_t *timep);
gmtime() returns a pointer to a struct tm that represents UTC time. localtime() returns a pointer to a struct tm representing the local time; that is, it takes the current time zone and daylight-saving time into account. In effect, this is "wall-clock time," the date and time as it would be displayed on a wall clock or on a wristwatch. (How this works is discussed later, see Section 6.1.5, "Getting Time-Zone Information," page 178.) Both functions return a pointer to a struct tm, which looks like this:

struct tm {
    int tm_sec;     /* seconds */
    int tm_min;     /* minutes */
    int tm_hour;    /* hours */
    int tm_mday;    /* day of the month */
    int tm_mon;     /* month */
    int tm_year;    /* year */
    int tm_wday;    /* day of the week */
    int tm_yday;    /* day in the year */
    int tm_isdst;   /* daylight saving time */
};
The struct tm is referred to as a broken-down time, since the time_t value is “broken down” into its component parts. The component parts, their ranges, and their meanings are shown in Table 6.1.
TABLE 6.1 Fields in the struct tm

Member      Range          Meaning
tm_sec      0–60           Second within a minute. Second 60 allows for leap seconds. (C89 had the range as 0–61.)
tm_min      0–59           Minute within an hour.
tm_hour     0–23           Hour within the day.
tm_mday     1–31           Day of the month.
tm_mon      0–11           Month of the year.
tm_year     0–N            Year, in years since 1900.
tm_wday     0–6            Day of week, Sunday = 0.
tm_yday     0–365          Day of year, January 1 = 0.
tm_isdst    < 0, 0, > 0    Daylight Savings Time flag.
The ISO C standard presents most of these values as "x since y." For example, tm_sec is "seconds since the minute," tm_mon is "months since January," tm_wday is "days since Sunday," and so on. This helps to understand why all the values start at 0. (The single exception, logically enough, is tm_mday, the day of the month, which ranges from 1–31.) Of course, having them start at zero is also practical; since C arrays are zero-based, it makes using these values as indices trivial:

static const char *const days[] = {     /* Array of day names */
    "Sunday", "Monday", "Tuesday", "Wednesday",
    "Thursday", "Friday", "Saturday",
};
time_t now;
struct tm *curtime;

time(& now);                            /* Get current time */
curtime = gmtime(& now);                /* Break it down */
printf("Day of the week: %s\n", days[curtime->tm_wday]);    /* Index and print */
Both gmtime() and localtime() return a pointer to a struct tm. The pointer points to a static struct tm maintained by each routine, and it is likely that these struct tm structures are overwritten each time the routines are called. Thus, it’s a good idea to make a copy of the returned struct. Reusing the previous example:
static const char *const days[] = { /* As before */ };
time_t now;
struct tm curtime;                      /* Structure, not pointer */

time(& now);                            /* Get current time */
curtime = *gmtime(& now);               /* Break it down and copy data */
printf("Day of the week: %s\n", days[curtime.tm_wday]);     /* Index and print, use . not -> */
The tm_isdst field indicates whether or not daylight-saving time (DST) is currently in effect. A value of 0 means DST is not in effect, a positive value means it is, and a negative value means that no DST information is available. (The C standard is purposely vague, indicating only zero, positive, or negative; this gives implementors the most freedom.)
6.1.3 Formatting Dates and Times
The examples in the previous section showed how the fields in a struct tm could be used to index arrays of character strings for printing informative date and time values. While you could write your own code to use such arrays for formatting dates and times, standard routines alleviate the work.

6.1.3.1 Simple Time Formatting: asctime() and ctime()
The first two standard routines, listed below, produce output in a fixed format:

#include <time.h>                                              ISO C

char *asctime(const struct tm *tm);
char *ctime(const time_t *timep);

As with gmtime() and localtime(), asctime() and ctime() return pointers to static buffers that are likely to be overwritten upon each call. Furthermore, these two routines return strings in the same format. They differ only in the kind of argument they accept. asctime() and ctime() should be used when all you need is simple date and time information:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now;

    time(& now);
    printf("%s", ctime(& now));
}
When run, this program produces output of the form: 'Thu May 22 15:44:21 2003'. The terminating newline is included in the result. To be more precise, the return value points to an array of 26 characters, as shown in Figure 6.1.

 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
 T  h  u     M  a  y     2  2     1  5  :  4  4  :  2  1     2  0  0  3 \n \0

FIGURE 6.1 Return string from ctime() and asctime()
Much older Unix code relies on the fact that the values have a fixed position in the returned string. When using these routines, remember that they include a trailing newline. Thus, the small example program uses a simple "%s" format string for printf(), and not "%s\n", as might be expected.

ctime() saves you the step of calling localtime(); it's essentially equivalent to

time_t now;
char *curtime;

time(& now);
curtime = asctime(localtime(& now));
6.1.3.2 Complex Time Formatting: strftime()
While asctime() and ctime() are often adequate, they are also limited:
• The output format is fixed. There's no way to rearrange the order of the elements.
• The output does not include time-zone information.
• The output uses abbreviated month and day names.
• The output assumes English names for the months and days.
For these reasons, C89 introduced the strftime() standard library routine:

#include <time.h>                                              ISO C

size_t strftime(char *s, size_t max, const char *format,
                const struct tm *tm);
strftime() is similar to sprintf(). The arguments are as follows:

char *s
    A buffer to hold the formatted string.

size_t max
    The size of the buffer.

const char *format
    The format string.

const struct tm *tm
    A struct tm pointer representing the broken-down time to be formatted.

The format string contains literal characters, intermixed with conversion specifiers that indicate what is to be placed into the string, such as the full weekday name, the hour according to a 24-hour or 12-hour clock, a.m. or p.m. designations, and so on. (Examples coming shortly.) If the entire string can be formatted within max characters, the return value is the number of characters placed in s, not including the terminating zero byte. Otherwise, the return value is 0. In the latter case, the contents of s are "indeterminate." The following simple example gives the flavor of how strftime() is used:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    char buf[100];
    time_t now;
    struct tm *curtime;

    time(& now);
    curtime = localtime(& now);
    (void) strftime(buf, sizeof buf,
                    "It is now %A, %B %d, %Y, %I:%M %p", curtime);
    printf("%s\n", buf);

    exit(0);
}
When run, this program prints something like:

It is now Thursday, May 22, 2003, 04:15 PM
Table 6.2 provides the full list of conversion specifiers, their possible alternative representations, and their meanings. In addition, the C99 standard added more specifiers to the list; those that are new in C99 are marked with a ✓ symbol.
TABLE 6.2 strftime() conversion format specifiers

Specifier(s)    C99   Meaning
%a                    The locale's abbreviated weekday name.
%A                    The locale's full weekday name.
%b                    The locale's abbreviated month name.
%B                    The locale's full month name.
%c, %Ec               The locale's "appropriate" date and time representation.
%C, %EC         ✓     The century (00–99).
%d, %Od               The day of the month (01–31).
%D              ✓     Same as %m/%d/%y.
%e, %Oe         ✓     The day of the month. A single digit is preceded with a space (1–31).
%F              ✓     Same as %Y-%m-%d (ISO 8601 date format).
%g              ✓     The last two digits of week-based year (00–99).
%G              ✓     The ISO 8601 week-based year.
%h              ✓     Same as %b.
%H, %OH               The hour in a 24-hour clock (00–23).
%I, %OI               The hour in a 12-hour clock (01–12).
%j                    The day of the year (001–366).
%m, %Om               The month as a number (01–12).
%M, %OM               The minute as a number (00–59).
%n              ✓     A newline character ('\n').
%p                    The locale's a.m./p.m. designation.
%r              ✓     The locale's 12-hour clock time.
%R              ✓     Same as %H:%M.
%S, %OS               The second as a number (00–60).
%t              ✓     A TAB character ('\t').
%T              ✓     Same as %H:%M:%S (ISO 8601 time format).
%u, %Ou         ✓     ISO 8601 weekday number, Monday = 1 (1–7).
%U, %OU               Week number, first Sunday is first day of week 1 (00–53).
TABLE 6.2 (Continued)

Specifier(s)    C99   Meaning
%V, %OV         ✓     ISO 8601 week number (01–53).
%w, %Ow               The weekday as a number, Sunday = 0 (0–6).
%W, %OW               Week number, first Monday is first day of week 1 (00–53).
%x, %Ex               The locale's "appropriate" date representation.
%X, %EX               The locale's "appropriate" time representation.
%y, %Ey, %Oy          The last two digits of the year (00–99).
%Y, %EY               The year as a number.
%Z                    The locale's time zone, or no characters if no time-zone information is available.
%%                    A single %.
A locale is a way of describing the current location, taking into account such things as language, character set, and defaults for formatting dates, times, and monetary amounts, and so on. We deal with them in Chapter 13, “Internationalization and Localization,” page 485. For now, it’s enough to understand that the results from strftime() for the same format string can vary, according to the current locale. The versions starting with %E and %O are for “alternative representations.” Some locales have multiple ways of representing the same thing; these specifiers provide access to the additional representations. If a particular locale does not support alternative representations, then strftime() uses the regular version. Many Unix versions of date allow you to provide, on the command line, a format string that begins with a + character. date then formats the current date and time and prints it according to the format string: $ date +'It is now %A, %B %d, %Y, %I:%M %p' It is now Sunday, May 25, 2003, 06:44 PM
Most of the new C99 specifiers come from such existing Unix date implementations. The %n and %t formats are not strictly necessary in C, since the TAB and newline characters can be directly embedded in the string. However, in the context of a date format string on the command line, they make more sense. Thus, they’re included in the specification for strftime() as well.
The ISO 8601 standard defines (among other things) how weeks are numbered within a year. According to this standard, weeks run Monday through Sunday, and Monday is day 1 of the week, not day 0. If the week in which January 1 comes out contains at least four days in the new year, then it is considered to be week 1. Otherwise, that week is the last week of the previous year, numbered 52 or 53. These rules are used for the computation of the %g, %G, and %V format specifiers. (While parochial Americans such as the author may find these rules strange, they are commonly used throughout Europe.) Many of the format specifiers produce results that are specific to the current locale. In addition, several indicate that they produce the "appropriate" representation for the locale (for example, %x). The C99 standard defines the values for the "C" locale. These values are listed in Table 6.3.

TABLE 6.3 "C" locale values for certain strftime() formats

Specifier   Meaning
%a          The first three characters of %A.
%A          One of Sunday, Monday, …, Saturday.
%b          The first three characters of %B.
%B          One of January, February, …, December.
%c          Same as %a %b %e %T %Y.
%p          One of AM or PM.
%r          Same as %I:%M:%S %p.
%x          Same as %m/%d/%y.
%X          Same as %T.
%Z          Implementation-defined.
It should be obvious that strftime() provides considerable flexibility and control over date- and time-related output, in much the same way as printf() and sprintf() do. Furthermore, strftime() cannot overflow its buffer, since it checks against the passed-in size parameter, making it a safer routine than is sprintf().
As a simple example, consider the creation of program log files, when a new file is created every hour. The filename should embed the date and time of its creation in its name:

/* Error checking omitted for brevity */
char fname[PATH_MAX];       /* PATH_MAX is in <limits.h> */
time_t now;
struct tm *tm;
int fd;

time(& now);
tm = localtime(& now);
strftime(fname, sizeof fname, "/var/log/myapp.%Y-%m-%d-%H:%M", tm);
fd = creat(fname, 0600);
...
The year-month-day-hour-minute format causes the filenames to sort in the order they were created. NOTE Some time formats are more useful than others. For example, 12-hour times are ambiguous, as are any purely numeric date formats. (What does ‘9/11’ mean? It depends on where you live.) Similarly, two-digit years are also a bad idea. Use strftime() judiciously.
6.1.4 Converting a Broken-Down Time to a time_t
Obtaining seconds-since-the-Epoch values from the system is easy; that's how date and times are stored in inodes and returned from time() and stat(). These values are also easy to compare for equality or by < and > for simple earlier-than/later-than tests. However, dates entered by humans are not so easy to work with. For example, many versions of the touch command allow you to provide a date and time to which touch should set a file's modification or access time (with utime(), as described in Section 5.5.3, "Changing Timestamps: utime()," page 157). Converting a date as entered by a person into a time_t value is difficult: Leap years must be taken into account, time zones must be compensated for, and so on. Therefore, the C89 standard introduced the mktime() function:

#include <time.h>                                              ISO C

time_t mktime(struct tm *tm);
To use mktime(), fill in a struct tm with appropriate values: year, month, day, and so on. If you know whether daylight-saving time was in effect for the given date,
set the tm_isdst field appropriately: 0 for "no," and positive for "yes." Otherwise, use a negative value for "don't know." The tm_wday and tm_yday fields are ignored.

mktime() assumes that the struct tm represents a local time, not UTC. It returns a time_t value representing the passed-in date and time, or it returns (time_t) -1 if the given date/time cannot be represented correctly. Upon a successful return, all the values in the struct tm are adjusted to be within the correct ranges, and tm_wday and tm_yday are set correctly as well. Here is a simple example:

 1   /* ch06-echodate.c --- demonstrate mktime(). */
 2   #include <stdio.h>
 3   #include <stdlib.h>
 4   #include <time.h>
 5
 6   int main(void)
 7   {
 8       struct tm tm;
 9       time_t then;
10
11       printf("Enter a Date/time as YYYY/MM/DD HH:MM:SS : ");
12       scanf("%d/%d/%d %d:%d:%d",
13           & tm.tm_year, & tm.tm_mon, & tm.tm_mday,
14           & tm.tm_hour, & tm.tm_min, & tm.tm_sec);
15
16       /* Error checking on values omitted for brevity. */
17       tm.tm_year -= 1900;
18       tm.tm_mon--;
19
20       tm.tm_isdst = -1;    /* Don't know about DST */
21
22       then = mktime(& tm);
23
24       printf("Got: %s", ctime(& then));
25       exit(0);
26   }
Line 11 prompts for a date and time, and lines 12–14 read it in. (Production code should check the return value from scanf().) Lines 17 and 18 compensate for the different basing of years and months, respectively. Line 20 indicates that we don't know whether or not the given date and time represent daylight-saving time. Line 22 calls mktime(), and line 24 prints the result of the conversion. When compiled and run, we see that it works:

$ ch06-echodate
Enter a Date/time as YYYY/MM/DD HH:MM:SS : 2003/5/25 19:07:23
Got: Sun May 25 19:07:23 2003
6.1.5 Getting Time-Zone Information Early Unix systems embedded time-zone information into the kernel when it was compiled. The rules for daylight-saving time conversions were generally hard-coded, which was painful for users outside the United States or in places within the United States that didn’t observe DST. Modern systems have abstracted that information into binary files read by the C library when time-related functions are invoked. This technique avoids the need to recompile libraries and system executables when the rules change and makes it much easier to update the rules. The C language interface to time-zone information evolved across different Unix versions, both System V and Berkeley, until finally it was standardized by POSIX as follows: #include
POSIX
extern char *tzname[2]; extern long timezone; extern int daylight; void tzset(void);
The tzset() function examines the TZ environment variable to find time-zone and daylight-saving time information.1 If that variable isn’t set, then tzset() uses an “implementation-defined default time zone,” which is most likely the time zone of the machine you’re running on. After tzset() has been called, the local time-zone information is available in several variables: extern char *tzname[2]
The standard and daylight-saving time names for the time zone. For example, for U.S. locations in the Eastern time zone, the time-zone names are ‘EST’ (Eastern Standard Time) and ‘EDT’ (Eastern Daylight Time).
1 Although POSIX standardizes TZ's format, it isn't all that interesting, so we haven't bothered to document it here. After all, it is tzset() that has to understand the format, not user-level code. Implementations can, and do, use formats that extend POSIX.
extern long timezone
    The difference, in seconds, between the current time zone and UTC. The standard does not explain how this difference works. In practice, negative values represent time zones east of (ahead of, or later than) UTC; positive values represent time zones west of (behind, or earlier than) UTC. If you look at this value as "how much to change the local time to make it be the same as UTC," then the sign of the value makes sense.

extern int daylight
    This variable is zero if daylight-saving time conversions should never be applied in the current time zone, and nonzero otherwise.

NOTE The daylight variable does not indicate whether daylight-saving time is currently in effect! Instead, it merely states whether the current time zone can even have daylight-saving time.
The POSIX standard indicates that ctime(), localtime(), mktime(), and strftime() all act "as if" they call tzset(). This means that they need not actually call tzset(), but they must behave as if it had been called. (The wording is intended to provide a certain amount of flexibility for implementors while guaranteeing correct behavior for user-level code.) In practice, this means that you will almost never have to call tzset() yourself. However, it's there if you need it.

6.1.5.1 BSD Systems Gotcha: timezone(), Not timezone
Instead of the POSIX timezone variable, a number of systems derived from 4.4 BSD provide a timezone() function:

#include <time.h>                                              BSD

char *timezone(int zone, int dst);
The zone argument is the number of minutes west of GMT, and dst is true if daylight-saving time is in effect. The return value is a string giving the name of the indicated zone, or a value expressed relative to GMT. This function provides compatibility with the V7 function of the same name and behavior.
Local Time: How Does It Know?
GNU/Linux systems store time zone information in files and directories underneath /usr/share/zoneinfo:

$ cd /usr/share/zoneinfo
$ ls -FC
Africa/      Canada/   Factory    Iceland    MST7MDT   Portugal    W-SU
America/     Chile/    GB         Indian/    Mexico/   ROC         WET
Antarctica/  Cuba      GB-Eire    Iran       Mideast/  ROK         Zulu
Arctic/      EET       GMT        Israel     NZ        Singapore   iso3166.tab
Asia/        EST       GMT+0      Jamaica    NZ-CHAT   SystemV/    posix/
Atlantic/    EST5EDT   GMT-0      Japan      Navajo    Turkey      posixrules
Australia/   Egypt     GMT0       Kwajalein  PRC       UCT         right/
Brazil/      Eire      Greenwich  Libya      PST8PDT   US/         zone.tab
CET          Etc/      HST        MET        Pacific/  UTC
CST6CDT      Europe/   Hongkong   MST        Poland    Universal
When possible, this directory uses hard links to provide the same data by multiple names. For example, the files EST5EDT and US/Eastern are really the same:

$ ls -il EST5EDT US/Eastern
724350 -rw-r--r--   5 root   root   1267 Sep  6  2002 EST5EDT
724350 -rw-r--r--   5 root   root   1267 Sep  6  2002 US/Eastern
Part of the process of installing a system is to choose the time zone. The correct time-zone data file is then placed in /etc/localtime:

$ file /etc/localtime
/etc/localtime: timezone data

On our system, this is a standalone copy of the time-zone file for our time zone. On other systems, it may be a symbolic link to the file in /usr/share/zoneinfo. The advantage of using a separate copy is that everything still works if /usr isn't mounted. The TZ environment variable, if set, overrides the default time zone:

$ date                          Date and time in default time zone
Wed Nov 19 06:44:50 EST 2003
$ export TZ=PST8PDT             Change time zone to US West Coast
$ date                          Print date and time
Wed Nov 19 03:45:09 PST 2003
This function’s widespread existence makes portable use of the POSIX timezone variable difficult. Fortunately, we don’t see a huge need for it: strftime() should be sufficient for all but the most unusual needs.
6.2 Sorting and Searching Functions

Sorting and searching are two fundamental operations, the need for which arises continually in many applications. The C library provides a number of standard interfaces for performing these tasks. All the routines share a common theme: data are managed through void * pointers, and user-provided functions supply ordering. Note also that these APIs apply to in-memory data. Sorting and searching structures in files is considerably more involved and beyond the scope of an introductory text such as this one. (However, the sort command works well for text files; see the sort(1) manpage. Sorting binary files requires that a special-purpose program be written.) Because no one algorithm works well for all applications, there are several different sets of library routines for maintaining searchable collections of data. This chapter covers only one simple interface for searching. Another, more advanced, interface is described in Section 14.4, "Advanced Searching with Binary Trees," page 551. Furthermore, we purposely don't explain the underlying algorithms, since this is a book on APIs, not algorithms and data structures. What's important to understand is that you can treat the APIs as "black boxes" that do a particular job, without needing to understand the details of how they do the job.
6.2.1 Sorting: qsort()
Sorting is accomplished with qsort():

#include <stdlib.h>                                            ISO C

void qsort(void *base, size_t nmemb, size_t size,
           int (*compare)(const void *, const void *));
The name qsort() comes from C.A.R. Hoare's Quicksort algorithm, which was used in the initial Unix implementation. (Nothing in the POSIX standard dictates the use of this algorithm for qsort(). The GLIBC implementation uses a highly optimized combination of Quicksort and Insertion Sort.) qsort() sorts arrays of arbitrary objects. It works by shuffling opaque chunks of memory from one spot within the array to another and relies on you, the programmer, to provide a comparison function that allows it to determine the ordering of one array element relative to another. The arguments are as follows:
void *base
    The address of the beginning of the array.

size_t nmemb
    The total number of elements in the array.

size_t size
    The size of each element in the array. The best way to obtain this value is with the C sizeof operator.

int (*compare)(const void *, const void *)
    A possibly scary declaration for a function pointer. It says that "compare points to a function that takes two 'const void *' parameters, and returns an int."

Most of the work is in writing a proper comparison function. The return value should mimic that of strcmp(): less than zero if the first value is "less than" the second, zero if they are equal, and greater than zero if the first value is "greater than" the second. It is the comparison function that defines the meaning of "less than" and "greater than" for whatever it is you're sorting. For example, to compare two double values, we could use this function:

int dcomp(const void *d1p, const void *d2p)
{
    const double *d1, *d2;

    d1 = (const double *) d1p;      /* Cast pointers to right type */
    d2 = (const double *) d2p;

    if (*d1 < *d2)                  /* Compare and return right value */
        return -1;
    else if (*d1 > *d2)
        return 1;
    else if (*d1 == *d2)
        return 0;
    else
        return -1;                  /* NaN sorts before real numbers */
}
This shows the general boilerplate for a comparison function: convert the arguments from void * to pointers to the type being compared and then return a comparison value. For floating-point values, a simple subtraction such as ‘return *d1 - *d2’ doesn’t work, particularly if one value is very small or if one or both values are special
"not a number" or "infinity" values. Thus, we have to do the comparison manually, including taking into account the not-a-number value (which doesn't even compare equal to itself!).

6.2.1.1 Example: Sorting Employees

For more complicated structures, a more involved function is necessary. For example, consider the following (rather trivial) struct employee:

    struct employee {
        char lastname[30];
        char firstname[30];
        long emp_id;
        time_t start_date;
    };
We might write a function to sort employees by last name, first name, and ID number:

    int emp_name_id_compare(const void *e1p, const void *e2p)
    {
        const struct employee *e1, *e2;
        int last, first;

        e1 = (const struct employee *) e1p;     /* Convert pointers */
        e2 = (const struct employee *) e2p;

        if ((last = strcmp(e1->lastname, e2->lastname)) != 0)   /* Compare last names */
            return last;                                        /* Last names differ */

        /* same last name, check first name */
        if ((first = strcmp(e1->firstname, e2->firstname)) != 0)   /* Compare first names */
            return first;                                          /* First names differ */

        /* same first name, check ID numbers */
        if (e1->emp_id < e2->emp_id)            /* Compare employee ID */
            return -1;
        else if (e1->emp_id == e2->emp_id)
            return 0;
        else
            return 1;
    }
The logic here is straightforward, initially comparing on last names, then first names, and then using the employee ID number if the two names are the same. By using strcmp() on strings, we automatically get the right kind of negative/zero/positive value to return. The employee ID comparison can’t just use subtraction: suppose long is 64 bits and int is 32 bits, and the two values differ only in the upper 32 bits (say the lower 32 bits
are zero). In such a case, the subtraction result would automatically be cast to int, throwing away the upper 32 bits and returning an incorrect value.

NOTE
We could have stopped with the comparison on first names, in which case all employees with the same last and first names would be grouped, but without any other ordering. This point is important: qsort() does not guarantee a stable sort. A stable sort is one in which, if two elements compare equal based on some key value(s), they will maintain their original ordering, relative to each other, in the final sorted array. For example, consider three employees with the same first and last names, with employee numbers 17, 42, and 81. Their order in the original array might have been 42, 81, and 17. (Meaning, employee 42 is at a lower index than employee 81, who, in turn, is at a lower index than employee 17.) After sorting, the order might be 81, 42, and 17. If this is an issue, then the comparison routine must take all important key values into consideration. (Ours does.)
Simply by using a different function, we can sort employees by seniority:

    int emp_seniority_compare(const void *e1p, const void *e2p)
    {
        const struct employee *e1, *e2;
        double diff;

        e1 = (const struct employee *) e1p;     /* Cast pointers to correct type */
        e2 = (const struct employee *) e2p;

        diff = difftime(e1->start_date, e2->start_date);    /* Compare times */
        if (diff < 0)
            return -1;
        else if (diff > 0)
            return 1;
        else
            return 0;
    }
For maximum portability we have used difftime(), which returns the difference in seconds between two time_t values. For this specific case, a cast such as

    return (int) difftime(e1->start_date, e2->start_date);
should do the trick, since time_t values are within reasonable ranges. Nevertheless, we instead use a full three-way if statement, just to be safe.

Here is a sample data file, listing five U.S. presidents:

    $ cat presdata.txt
    Bush George 43 980013600
    Clinton William 42 727552800
    Bush George 41 601322400
    Reagan Ronald 40 348861600
    Carter James 39 222631200
(The fields are last name, first name, president number, and inauguration time.)
ch06-sortemp.c shows a simple program that reads this file into a struct employee array and then sorts it, using the two different comparison functions just presented.

      1  /* ch06-sortemp.c --- Demonstrate qsort() with two comparison functions. */
      2
      3  #include <stdio.h>
      4  #include <stdlib.h>
      5  #include <time.h>
      6
      7  struct employee {
      8      char lastname[30];
      9      char firstname[30];
     10      long emp_id;
     11      time_t start_date;
     12  };
     13
     14  /* emp_name_id_compare --- compare by name, then by ID */
     15
     16  int emp_name_id_compare(const void *e1p, const void *e2p)
     17  {
            … as shown previously, omitted to save space …
     39  }
     40
     41  /* emp_seniority_compare --- compare by seniority */
     42
     43  int emp_seniority_compare(const void *e1p, const void *e2p)
     44  {
            … as shown previously, omitted to save space …
     58  }
     59
     60  /* main --- demonstrate sorting */
     61
     62  int main(void)
     63  {
     64  #define NPRES 10
     65      struct employee presidents[NPRES];
     66      int i, npres;
     67      char buf[BUFSIZ];
     68
     69      /* Very simple code to read data: */
     70      for (npres = 0; npres < NPRES && fgets(buf, BUFSIZ, stdin) != NULL;
     71              npres++) {
     72          sscanf(buf, "%s %s %ld %ld\n",
     73              presidents[npres].lastname,
     74              presidents[npres].firstname,
     75              & presidents[npres].emp_id,
     76              & presidents[npres].start_date);
     77      }
     78
     79      /* npres is now number of actual lines read. */
     80
     81      /* First, sort by name */
     82      qsort(presidents, npres, sizeof(struct employee), emp_name_id_compare);
     83
     84      /* Print output */
     85      printf("Sorted by name:\n");
     86      for (i = 0; i < npres; i++)
     87          printf("\t%s %s\t%ld\t%s",
     88              presidents[i].lastname,
     89              presidents[i].firstname,
     90              presidents[i].emp_id,
     91              ctime(& presidents[i].start_date));
     92
     93      /* Now, sort by seniority */
     94      qsort(presidents, npres, sizeof(struct employee), emp_seniority_compare);
     95
     96      /* And print again */
     97      printf("Sorted by seniority:\n");
     98      for (i = 0; i < npres; i++)
     99          printf("\t%s %s\t%ld\t%s",
    100              presidents[i].lastname,
    101              presidents[i].firstname,
    102              presidents[i].emp_id,
    103              ctime(& presidents[i].start_date));
    104  }
Lines 70–77 read in the data. Note that any use of scanf() requires “well behaved” input data. If, for example, any name is more than 29 characters, there’s a problem. In this case, we’re safe, but production code must be considerably more careful. Line 82 sorts the data by name and employee ID, and then lines 84–91 print the sorted data. Similarly, line 94 re-sorts the data, this time by seniority, with lines 97–103 printing the results. When compiled and run, the program produces the following results:
    $ ch06-sortemp < presdata.txt
    Sorted by name:
            Bush George     41      Fri Jan 20 13:00:00 1989
            Bush George     43      Sat Jan 20 13:00:00 2001
            Carter James    39      Thu Jan 20 13:00:00 1977
            Clinton William 42      Wed Jan 20 13:00:00 1993
            Reagan Ronald   40      Tue Jan 20 13:00:00 1981
    Sorted by seniority:
            Carter James    39      Thu Jan 20 13:00:00 1977
            Reagan Ronald   40      Tue Jan 20 13:00:00 1981
            Bush George     41      Fri Jan 20 13:00:00 1989
            Clinton William 42      Wed Jan 20 13:00:00 1993
            Bush George     43      Sat Jan 20 13:00:00 2001
(We’ve used 1:00 p.m. as an approximation for the time when all of the presidents started working.2) One point is worth mentioning: qsort() rearranges the data in the array. If each array element is a large structure, a lot of data will be copied back and forth as the array is sorted. It may pay, instead, to set up a separate array of pointers, each of which points at one element of the array. Then use qsort() to sort the pointer array, accessing the unsorted data through the sorted pointers. The price paid is the extra memory to hold the pointers and modification of the comparison function to use an extra pointer indirection when comparing the structures. The benefit returned can be a considerable speedup, since only a four- or eight-byte pointer is moved around at each step, instead of a large structure. (Our struct employee is at least 68 bytes in size. Swapping four-byte pointers moves 17 times less data than does swapping structures.) For thousands of in-memory structures, the difference can be significant. NOTE If you’re a C++ programmer, beware! qsort() may be dangerous to use with arrays of objects! qsort() does raw memory moves, copying bytes. It’s completely unaware of C++ constructs such as copy constructors or operator=() functions. Instead, use one of the STL sorting functions, or use the separate-array-of-pointers technique.
2 The output shown here is for U.S. Eastern Standard Time. You will get different results for the same program
and data if you use a different time zone.
6.2.1.2 Example: Sorting Directory Contents

In Section 5.3, "Reading Directories," page 132, we demonstrated that directory entries are returned in physical directory order. Most of the time, it's much more useful to have directory contents sorted in some fashion, such as by name or by modification time. While not standardized by POSIX, several routines make it easy to do this, using qsort() as the underlying sorting agent:

    #include <dirent.h>                                          Common

    int scandir(const char *dir, struct dirent ***namelist,
                int (*select)(const struct dirent *),
                int (*compare)(const struct dirent **, const struct dirent **));
    int alphasort(const void *a, const void *b);

    int versionsort(const void *a, const void *b);               GLIBC

The scandir() and alphasort() functions were made available in 4.2 BSD and are widely supported.3 versionsort() is a GNU extension.

scandir() reads the directory named by dir, creates an array of struct dirent pointers by using malloc(), and sets *namelist to point to the beginning of that array. Both the array of pointers and the pointed-to struct dirent structures are allocated with malloc(); it is up to the calling code to use free() to avoid memory leaks.
Use the select function pointer to choose entries of interest. When this value is NULL, all valid directory entries are included in the final array. Otherwise, (*select)() is called for each entry, and those entries for which it returns nonzero (true) are included in the array.

The compare function pointer compares two directory entries. It is passed to qsort() for use in sorting. alphasort() compares filenames lexicographically. It uses the strcoll() function for comparison. strcoll() is similar to strcmp() but takes locale-related sorting rules into consideration (see Section 13.4, "Can You Spell That for Me, Please?", page 521).

versionsort() is a GNU extension that uses the GNU strverscmp() function to compare filenames (see strverscmp(3)). To make a long story short, this function understands common filename versioning conventions and compares appropriately.
3 One notable exception is Sun's Solaris, where these two functions exist only in the hard-to-use BSD compatibility library.
ch06-sortdir.c shows a program similar to ch04-catdir.c. However, it uses scandir() and alphasort() to do the work.

      1  /* ch06-sortdir.c --- Demonstrate scandir(), alphasort(). */
      2
      3  #include <stdio.h>      /* for printf() etc. */
      4  #include <errno.h>      /* for errno */
      5  #include <sys/types.h>  /* for system types */
      6  #include <dirent.h>     /* for directory functions */
      7
      8  char *myname;
      9  int process(const char *dir);
     10
     11  /* main --- loop over directory arguments */
     12
     13  int main(int argc, char **argv)
     14  {
     15      int i;
     16      int errs = 0;
     17
     18      myname = argv[0];
     19
     20      if (argc == 1)
     21          errs = process(".");    /* default to current directory */
     22      else
     23          for (i = 1; i < argc; i++)
     24              errs += process(argv[i]);
     25
     26      return (errs != 0);
     27  }
     28
     29  /* nodots --- ignore dot files, for use by scandir() */
     30
     31  int nodots(const struct dirent *dp)
     32  {
     33      return (dp->d_name[0] != '.');
     34  }
     35
     36  /*
     37   * process --- do something with the directory, in this case,
     38   * print inode/name pairs on standard output.
     39   * Return 0 if all OK, 1 otherwise.
     40   */
     41
     43  int process(const char *dir)
     44  {
     45      DIR *dp;
     46      struct dirent **entries;
     47      int nents, i;
     48
     49      nents = scandir(dir, & entries, nodots, alphasort);
     50      if (nents < 0) {
     51          fprintf(stderr, "%s: scandir failed: %s\n", myname,
     52              strerror(errno));
     53          return 1;
     54      }
     55
     56      for (i = 0; i < nents; i++) {
     57          printf("%8ld %s\n", entries[i]->d_ino,
     58              entries[i]->d_name);
     59          free(entries[i]);
     60      }
     61
     62      free(entries);
     63
     64      return 0;
     65  }
The main() program (lines 1–27) follows the standard boilerplate we've used before. The nodots() function (lines 31–35) acts as the select parameter, choosing only filenames that don't begin with a period. The process() function (lines 43–65) is quite simple, with scandir() doing most of the work. Note how each element is released separately with free() (line 59) and how the entire array is also released (line 62).

When run, the directory contents do indeed come out in sorted order, without '.' and '..':

    $ ch06-sortdir                          Default action displays current directory
     2097176 00-preface.texi
     2097187 01-intro.texi
     2097330 02-cmdline.texi
     2097339 03-memory.texi
     2097183 03-memory.texi.save
     2097335 04-fileio.texi
     2097334 05-fileinfo.texi
     2097332 06-general1.texi
    ...
6.2.2 Binary Searching: bsearch()

A linear search is pretty much what it sounds like: You start at the beginning and walk through an array being searched until you find what you need. For something simple like finding integers, this usually takes the form of a for loop. Consider this function:

    /* ifind --- linear search, return index if found or -1 if not */

    int ifind(int x, const int array[], size_t nelems)
    {
        size_t i;

        for (i = 0; i < nelems; i++)
            if (array[i] == x)      /* found it */
                return i;

        return -1;
    }
The advantage to linear searching is that it's simple; it's easy to write the code correctly the first time. Furthermore, it always works. Even if elements are added to the end of the array or removed from the array, there's no need to sort the array.

The disadvantage to linear searching is that it's slow. On average, for an array containing nelems elements, a linear search for a random element does 'nelems / 2' comparisons before finding the desired element. This becomes prohibitively expensive, even on modern high-performance systems, as nelems becomes large. Thus, you should only use linear searching on small arrays.

Unlike a linear search, binary searching requires that the input array already be sorted. The disadvantage here is that if elements are added, the array must be re-sorted before it can be searched. (When elements are removed, the rest of the array contents must still be shuffled down. This is not as expensive as re-sorting, but it can still involve a lot of data motion.) The advantage to binary searching, and it's a significant one, is that binary searching is blindingly fast, requiring at most log2(N) comparisons, where N is the number of elements in the array.

The bsearch() function is declared as follows:

    #include <stdlib.h>                                          ISO C

    void *bsearch(const void *key, const void *base, size_t nmemb,
                  size_t size, int (*compare)(const void *, const void *));
The parameters and their purposes are similar to those of qsort():

const void *key
    The object being searched for in the array.

const void *base
    The start of the array.

size_t nmemb
    The number of elements in the array.

size_t size
    The size of each element, obtained with sizeof.

int (*compare)(const void *, const void *)
    The comparison function. It must work the same way as the qsort() comparison function, returning negative/zero/positive according to whether the first parameter is less than/equal to/greater than the second one.

bsearch() returns NULL if the object is not found. Otherwise, it returns a pointer to the found object. If more than one array element matches key, it is unspecified which one is returned. Thus, as with qsort(), make sure that the comparison function accounts for all relevant parts of the searched data structure.
ch06-searchemp.c shows bsearch() in practice, extending the struct employee example used previously.

      1  /* ch06-searchemp.c --- Demonstrate bsearch(). */
      2
      3  #include <stdio.h>
      4  #include <stdlib.h>
      5  #include <time.h>
      6
      7  struct employee {
      8      char lastname[30];
      9      char firstname[30];
     10      long emp_id;
     11      time_t start_date;
     12  };
     13
     14  /* emp_id_compare --- compare by ID */
     15
     16  int emp_id_compare(const void *e1p, const void *e2p)
     17  {
     18      const struct employee *e1, *e2;
     19
     20      e1 = (const struct employee *) e1p;
     21      e2 = (const struct employee *) e2p;
     22
     23      if (e1->emp_id < e2->emp_id)
     24          return -1;
     25      else if (e1->emp_id == e2->emp_id)
     26          return 0;
     27      else
     28          return 1;
     29  }
     30
     31  /* print_employee --- print an employee structure */
     32
     33  void print_employee(const struct employee *emp)
     34  {
     35      printf("%s %s\t%ld\t%s", emp->lastname, emp->firstname,
     36          emp->emp_id, ctime(& emp->start_date));
     37  }
Lines 7–12 define the struct employee; it's the same as before. Lines 16–29 serve as the comparison function, for both qsort() and bsearch(). It compares on employee ID number only. Lines 33–37 define print_employee(), which is a convenience function for printing the structure since this is done from multiple places.

     39  /* main --- demonstrate sorting */
     40
     41  int main(int argc, char **argv)
     42  {
     43  #define NPRES 10
     44      struct employee presidents[NPRES];
     45      int i, npres;
     46      char buf[BUFSIZ];
     47      struct employee *the_pres;
     48      struct employee key;
     49      int id;
     50      FILE *fp;
     51
     52      if (argc != 2) {
     53          fprintf(stderr, "usage: %s datafile\n", argv[0]);
     54          exit(1);
     55      }
     56
     57      if ((fp = fopen(argv[1], "r")) == NULL) {
     58          fprintf(stderr, "%s: %s: could not open: %s\n",
     59              argv[0], argv[1], strerror(errno));
     60          exit(1);
     61      }
     62
     63      /* Very simple code to read data: */
     64      for (npres = 0; npres < NPRES && fgets(buf, BUFSIZ, fp) != NULL;
     65              npres++) {
     66          sscanf(buf, "%s %s %ld %ld",
     67              presidents[npres].lastname,
     68              presidents[npres].firstname,
     69              & presidents[npres].emp_id,
     70              & presidents[npres].start_date);
     71      }
     72      fclose(fp);
     73
     74      /* npres is now number of actual lines read. */
     75
     76      /* First, sort by id */
     77      qsort(presidents, npres, sizeof(struct employee), emp_id_compare);
     78
     79      /* Print output */
     80      printf("Sorted by ID:\n");
     81      for (i = 0; i < npres; i++) {
     82          putchar('\t');
     83          print_employee(& presidents[i]);
     84      }
     85
     86      for (;;) {
     87          printf("Enter ID number: ");
     88          if (fgets(buf, BUFSIZ, stdin) == NULL)
     89              break;
     90          sscanf(buf, "%d\n", & id);
     91
     92          key.emp_id = id;
     93          the_pres = (struct employee *) bsearch(& key, presidents,
     94              npres, sizeof(struct employee), emp_id_compare);
     95
     96          if (the_pres != NULL) {
     97              printf("Found: ");
     98              print_employee(the_pres);
     99          } else
    100              printf("Employee with ID %d not found!\n", id);
    101      }
    102      putchar('\n');      /* Print a newline on EOF. */
    103
    104      exit(0);
    105  }
The main() function starts with argument checking (lines 52–55). It then reads the data from the named file (lines 57–72). Standard input cannot be used for the employee data, since that is reserved for prompting the user for the employee ID to search for.
Lines 77–84 sort the data and then print them. The program then goes into a loop, starting on line 86. It prompts for an employee ID number, exiting the loop upon end-of-file. To search the array, we use the struct employee named key. It's enough to set just its emp_id field to the entered ID number; none of the other fields are used in the comparison (line 92).

If an entry is found with the matching key, bsearch() returns a pointer to it. Otherwise it returns NULL. The return is tested on line 96, and appropriate action is then taken. Finally, line 102 prints a newline character so that the system prompt will come out on a fresh line. Here's a transcript of what happens when the program is compiled and run:

    $ ch06-searchemp presdata.txt                         Run the program
    Sorted by ID:
            Carter James    39      Thu Jan 20 13:00:00 1977
            Reagan Ronald   40      Tue Jan 20 13:00:00 1981
            Bush George     41      Fri Jan 20 13:00:00 1989
            Clinton William 42      Wed Jan 20 13:00:00 1993
            Bush George     43      Sat Jan 20 13:00:00 2001
    Enter ID number: 42                                   Enter a valid number
    Found: Clinton William  42      Wed Jan 20 13:00:00 1993    It's found
    Enter ID number: 29                                   Enter an invalid number
    Employee with ID 29 not found!                        It's not found
    Enter ID number: 40                                   Try another good one
    Found: Reagan Ronald    40      Tue Jan 20 13:00:00 1981    This one is found too
    Enter ID number: ^D                                   CTRL-D entered for EOF
    $                                                     Ready for next command
Additional, more advanced, APIs for searching data collections are described in Section 14.4, “Advanced Searching with Binary Trees,” page 551.
6.3 User and Group Names While the operating system works with user and group ID numbers for storage of file ownership and for permission checking, humans prefer to work with user and group names. Early Unix systems kept the information that mapped names to ID numbers in simple text files, /etc/passwd and /etc/group. These files still exist on modern systems, and their format is unchanged from that of V7 Unix. However, they no longer tell the complete story. Large installations with many networked hosts keep the information in network databases: ways of storing the information on a small number of
servers that are then accessed over the network.4 However, this usage is transparent to most applications since access to the information is done through the same API as was used for retrieving the information from the text files. It is for this reason that POSIX standardizes only the APIs; the /etc/passwd and /etc/group files need not exist, as such, for a system to be POSIX compliant. The APIs to the two databases are similar; most of our discussion focuses on the user database.
6.3.1 User Database

The traditional /etc/passwd format maintains one line per user. Each line has seven fields, each of which is separated from the next by a colon character:

    $ grep arnold /etc/passwd
    arnold:x:2076:10:Arnold D. Robbins:/home/arnold:/bin/bash

In order, the fields are as follows:

The user name
    This is what the user types to log in, what shows up for 'ls -l' and in any other context that displays users.

The password field
    On older systems, this is the user's encrypted password. On newer systems, this field is likely to be an x (as shown), meaning that the password information is held in a different file. This separation is a security measure; if the encrypted password isn't available to nonprivileged users, it is much harder to "crack."

The user ID number
    This should be unique; one number per user.

The group ID number
    This is the user's initial group ID number. As is discussed later, on modern systems processes have multiple groups associated with them.
4 Common network databases include Sun Microsystems’ Network Information Service (NIS) and NIS+, Kerberos
(Hesiod), MacOS X NetInfo (versions up to and including 10.2), and LDAP, the Lightweight Directory Access Protocol. BSD systems keep user information in on-disk databases and generate the /etc/passwd and /etc/group files automatically.
The user's real name
    This is at least a first and last name. Some systems allow for comma-separated fields, for office location, phone number, and so on, but this is not standardized.

The login directory
    This directory becomes the home directory for users when they log in ($HOME, the default for the cd command).

The login program
    The program to run when the user logs in. This is usually a shell, but it need not be. If this field is left empty, the default is /bin/sh.

Access to the user database is through the routines declared in <pwd.h>:

    #include <sys/types.h>                                   XSI
    #include <pwd.h>

    struct passwd *getpwent(void);
    void setpwent(void);
    void endpwent(void);

    struct passwd *getpwnam(const char *name);
    struct passwd *getpwuid(uid_t uid);
The fields in the struct passwd used by the various API routines correspond directly to the fields in the password file:

    struct passwd {
        char  *pw_name;     /* user name */
        char  *pw_passwd;   /* user password */
        uid_t  pw_uid;      /* user id */
        gid_t  pw_gid;      /* group id */
        char  *pw_gecos;    /* real name */
        char  *pw_dir;      /* home directory */
        char  *pw_shell;    /* shell program */
    };
(The name pw_gecos is historical; when the early Unix systems were being developed, this field held the corresponding information for the user's account on the Bell Labs Honeywell systems running the GECOS operating system.)

The purpose of each routine is described in the following list:

struct passwd *getpwent(void)
    Returns a pointer to an internal static struct passwd structure containing the "current" user's information. This routine reads through the entire password database, one record at a time, returning a pointer to a structure for each user.
    The same pointer is returned each time; that is, the internal struct passwd is overwritten for each user's entry. When getpwent() reaches the end of the password database, it returns NULL. Thus, it lets you step through the entire database, one user at a time. The order in which records are returned is undefined.

void setpwent(void)
    Resets the internal state such that the next call to getpwent() returns the first record in the password database.

void endpwent(void)
    "Closes the database," so to speak, be it a simple file, network connection, or something else.

struct passwd *getpwnam(const char *name)
    Looks up the user with a pw_name member equal to name, returning a pointer to a static struct passwd describing the user or NULL if the user is not found.

struct passwd *getpwuid(uid_t uid)
    Similarly, looks up the user with the user ID number given by uid, returning a pointer to a static struct passwd describing the user or NULL if the user is not found.

getpwuid() is what's needed when you have a user ID number (such as from a struct stat) and you wish to print the corresponding user name. getpwnam() converts a name to a user ID number, for example, if you wish to use chown() or fchown() on a file.

In theory, both of these routines do a linear search through the password database to find the desired information. This is true in practice when a password file is used; however, behind-the-scenes databases (network or otherwise, as on BSD systems) tend to use more efficient methods of storage, so these calls are possibly not as expensive in such a case.5

getpwent() is useful when you need to go through the entire password database. For instance, you might wish to read it all into memory, sort it, and then search it quickly with bsearch(). This is very useful for avoiding the multiple linear searches inherent in looking things up one at a time with getpwuid() or getpwnam().
5 Unfortunately, if performance is an issue, there’s no standard way to know how your library does things, and
indeed, the way it works can vary at runtime! (See the nsswitch.conf (5) manpage on a GNU/Linux system.) On the other hand, the point of the API is, after all, to hide the details.
NOTE
The pointers returned by getpwent(), getpwnam(), and getpwuid() all point to internal static data. Thus, you should make a copy of their contents if you need to save the information. Take a good look at the struct passwd definition. The members that represent character strings are pointers; they too point at internal static data, and if you're going to copy the structure, make sure to copy the data each member points to as well.
6.3.2 Group Database

The format of the /etc/group group database is similar to that of /etc/passwd, but with fewer fields:

    $ grep arnold /etc/group
    mail:x:12:mail,postfix,arnold
    uucp:x:14:uucp,arnold
    floppy:x:19:arnold
    devel:x:42:miriam,arnold
    arnold:x:2076:arnold
Again, there is one line per group, with fields separated by colons. The fields are as follows:

The group name
    This is the name of the group, as shown in 'ls -l' or in any other context in which a group name is needed.

The group password
    This field is historical. It is no longer used.

The group ID number
    As with the user ID, this should be unique to each group.

The user list
    This is a comma-separated list of users who are members of the group.

In the previous example, we see that user arnold is a member of multiple groups. This membership is reflected in practice in what is termed the group set. Besides the main user ID and group ID number that processes have, the group set is a set of additional group ID numbers that each process carries around with it. The system checks all of these group ID numbers against a file's group ID number when performing
permission checking. This subject is discussed in more detail in Chapter 11, "Permissions and User and Group ID Numbers," page 403.

The group database APIs are similar to those for the user database. The following functions are declared in <grp.h>:

    #include <sys/types.h>                                   XSI
    #include <grp.h>

    struct group *getgrent(void);
    void setgrent(void);
    void endgrent(void);

    struct group *getgrnam(const char *name);
    struct group *getgrgid(gid_t gid);
The struct group corresponds to the records in /etc/group:

    struct group {
        char  *gr_name;     /* group name */
        char  *gr_passwd;   /* group password */
        gid_t  gr_gid;      /* group id */
        char **gr_mem;      /* group members */
    };
The gr_mem field bears some explanation. While declared as a pointer to a pointer (char **), it is best thought of as an array of strings (like argv). The last element in the array is set to NULL. When no members are listed, the first element in the array is NULL.

ch06-groupinfo.c demonstrates how to use the struct group and the gr_mem field. The program accepts a single user name on the command line and prints all group records in which that user name appears:

      1  /* ch06-groupinfo.c --- Demonstrate getgrent() and struct group */
      2
      3  #include <stdio.h>
      4  #include <sys/types.h>
      5  #include <grp.h>
      6
      7  extern void print_group(const struct group *gr);
      8
      9  /* main --- print group lines for user named in argv[1] */
     10
     11  int main(int argc, char **argv)
     12  {
     13      struct group *gr;
     14      int i;
     15
     16
     17      if (argc != 2) {                             /* Check arguments */
     18          fprintf(stderr, "usage: %s user\n", argv[0]);
     19          exit(1);
     20      }
     21
     22      while ((gr = getgrent()) != NULL)            /* Get each group record */
     23          for (i = 0; gr->gr_mem[i] != NULL; i++)  /* Look at each member */
     24              if (strcmp(gr->gr_mem[i], argv[1]) == 0)   /* If found the user... */
     25                  print_group(gr);                 /* Print the record */
     26
     27      endgrent();
     28
     29      exit(0);
     30  }
The main() routine first does error checking (lines 17–20). The heart of the program is a nested loop. The outer loop (line 22) loops over all the group database records. The inner loop (line 23) loops over the members of the gr_mem array. If one of the members matches the name from the command line (line 24), then print_group() is called to print the record (line 25).

     32  /* print_group --- print a group record */
     33
     34  void print_group(const struct group *gr)
     35  {
     36      int i;
     37
     38      printf("%s:%s:%ld:", gr->gr_name, gr->gr_passwd,
     39          (long) gr->gr_gid);
     40
     41      for (i = 0; gr->gr_mem[i] != NULL; i++) {
     42          printf("%s", gr->gr_mem[i]);
     43          if (gr->gr_mem[i+1] != NULL)
     44              putchar(',');
     45      }
     46
     47      putchar('\n');
     48  }
The print_group() function (lines 34–48) is straightforward, with logic similar to that of main() for printing the member list. Group list members are comma separated; thus, the loop body has to check that the next element in the array is not NULL before printing a comma. This code works correctly, even if there are no members in
the group. However, for this program, we know there are members, or print_group() wouldn't have been called! Here's what happens when the program is run:

    $ ch06-groupinfo arnold
    mail:x:12:mail,postfix,arnold
    uucp:x:14:uucp,arnold
    floppy:x:19:arnold
    devel:x:42:miriam,arnold
    arnold:x:2076:arnold
6.4 Terminals: isatty()

The Linux/Unix standard input, standard output, standard error model discourages the special treatment of input and output devices. Programs generally should not need to know, or care, whether their output is a terminal, a file, a pipe, a physical device, or whatever. However, there are times when a program really does need to know what kind of file a file descriptor is associated with. The stat() family of calls often provides enough information: regular file, directory, device, and so on. Sometimes, though, even that is not enough, and for interactive programs in particular, you may need to know if a file descriptor represents a tty.

A tty (short for Teletype, one of the early manufacturers of computer terminals) is any device that represents a terminal, that is, something that a human would use to interact with the computer. This may be either a hardware device, such as the keyboard and monitor of a personal computer; an old-fashioned video display terminal connected to a computer by a serial line or modem; or a software pseudoterminal, such as is used for windowing systems and network logins. The discrimination can be made with isatty():

    #include <unistd.h>                                     POSIX

    int isatty(int desc);
This function returns 1 if the file descriptor desc represents a terminal, 0 otherwise. According to POSIX, isatty() may set errno to indicate an error; thus you should set errno to 0 before calling isatty() and then check its value if the return is 0. (The GNU/Linux isatty(3) manpage doesn’t mention the use of errno.) The POSIX standard also points out that just because isatty() returns 1 doesn’t mean there’s a human at the other end of the file descriptor!
One place where isatty() comes into use is in modern versions of ls, in which the default is to print filenames in columns if the standard output is a terminal and to print them one per line if not.
6.5 Suggested Reading

1. Mastering Algorithms With C, by Kyle Loudon. O'Reilly & Associates, Sebastopol, California, USA, 1999. ISBN: 1-56592-453-3.
   This book provides a practical, down-to-earth introduction to algorithms and data structures using C, covering hash tables, trees, sorting, and searching, among other things.

2. The Art of Computer Programming, Volume 3: Sorting and Searching, 2nd edition, by Donald E. Knuth. Addison-Wesley, Reading, Massachusetts, USA, 1998. ISBN: 0-201-89685-0.
   This book is usually cited as the final word on sorting and searching. Bear in mind that it is considerably denser and harder to read than the Loudon book.

3. The GTK+ project[6] consists of several libraries that work together. GTK+ is the underlying toolkit used by the GNU GNOME Project.[7] At the base of the library hierarchy is Glib, a library of fundamental types and data structures and functions for working with them. Glib includes facilities for all the basic operations we've covered so far in this book, and many more, including linked lists and hash tables. To see the online documentation, start at the GTK+ Documentation Project's web site,[8] click on the "Download" link, and proceed to the online version.
6.6 Summary

• Times are stored internally as time_t values, representing "seconds since the Epoch." The Epoch is midnight, January 1, 1970 UTC, for GNU/Linux and Unix
6. http://www.gtk.org
7. http://www.gnome.org
8. http://www.gtk.org/rdp
systems. The current time is retrieved from the system by the time() system call, and difftime() returns the difference, in seconds, between two time_t values.

• The struct tm structure represents a "broken-down time," which is a much more usable representation of a date and time. gmtime() and localtime() convert time_t values into struct tm values, and mktime() goes in the opposite direction.

• asctime() and ctime() do simplistic formatting of time values, returning a pointer to a fixed-size, fixed-format static character string. strftime() provides much more flexible formatting, including locale-based values.

• Time-zone information is made available by a call to tzset(). Since the standard routines act as if they call tzset() automatically, it is rare to need to call this function directly.

• The standard routine for sorting arrays is qsort(). By using a user-provided comparison function and being told the number of array elements and their size, qsort() can sort any kind of data. This provides considerable flexibility.

• scandir() reads an entire directory into an array of struct dirent. User-provided functions can be used to select which entries to include and can provide ordering of elements within the array. alphasort() is a standard function for sorting directory entries by name; scandir() passes the sorting function straight through to qsort().

• The bsearch() function works similarly to qsort(). It does fast binary searching. Use it if the cost of linear searching outweighs the cost of sorting your data. (An additional API for searching data collections is described in Section 14.4, "Advanced Searching with Binary Trees," page 551.)

• The user and group databases may be kept in local disk files or may be made available over a network. The standard API purposely hides this distinction. Each database provides both linear scanning of the entire database and direct queries for a user/group name or user/group ID.
• Finally, for those times when stat() just isn’t enough, isatty() can tell you whether or not an open file represents a terminal device.
6.7 Exercises
1. Write a simple version of the date command that accepts a format string on the command line and uses it to format and print the current time.

2. When a file is more than six months old, 'ls -l' uses a simpler format for printing the modification time. The GNU version of ls.c uses this computation:

   3043      /* Consider a time to be recent if it is within the past six
   3044         months.  A Gregorian year has 365.2425 * 24 * 60 * 60 ==
   3045         31556952 seconds on the average.  Write this value as an
   3046         integer constant to avoid floating point hassles.  */
   3047      six_months_ago = current_time - 31556952 / 2;
   Compare this to our example computation for computing the time six months in the past. What are the advantages and disadvantages of each method?

3. Write a simple version of the touch command that changes the modification time of the files named on the command line to the current time.

4. Add an option to your touch command that accepts a date and time specification on the command line and uses that value as the new modification time of the files named on the command line.

5. Add another option to your version of touch that takes a filename and uses the modification time of the given file as the new modification time for the files named on the command line.

6. Enhance ch06-sortemp.c to sort a separate array of pointers that point into the array of employees.

7. Add options to ch06-sortdir.c to sort by inode number, modification time, access time, and size. Add a "reverse option" such that time-based sorts make the most recent file first and other criteria (size, inode) sort by largest value first.

8. Write a simple version of the chown command. Its usage should be

       chown user[:group] files ...
Here, user and group are user and group names representing the new user and group for the named files. The group is optional; if present it is separated from the user by a colon. To test your version on a GNU/Linux system, you will have to work as root. Do so carefully!
9. Enhance your chown to allow numeric user or group numbers, as well as names.

10. Write functions to copy user and group structures, including pointed-to data. Use malloc() to allocate storage as needed.

11. Write a specialized user-lookup library that reads the entire user database into a dynamically allocated array. Provide fast lookup of users, by both user ID number and name. Be sure to handle the case in which a requested user isn't found.

12. Do the same thing for the group database.

13. Write a stat program that prints the contents of the struct stat for each file named on the command line. It should print all the values in human-readable format: time_t values as dates and times, uid_t and gid_t values as the corresponding names (if available), and the contents of symbolic links. Print the st_mode field the same way that ls would. Compare your program to the GNU Coreutils stat program, both by comparing outputs and by looking at the source code.
7 Putting It All Together: ls
In this chapter

7.1 V7 ls Options        page 208
7.2 V7 ls Code           page 209
7.3 Summary              page 225
Exercises                page 226
The V7 ls command nicely ties together everything we've seen so far. It uses almost all of the APIs we've covered, touching on many aspects of Unix programming: memory allocation, file metadata, dates and times, user names, directory reading, and sorting.
7.1 V7 ls Options

In comparison to modern versions of ls, the V7 ls accepted only a handful of options, and the meaning of some of them is different for V7 than for current ls. The options are as follows:

-a   Print all directory entries. Without this option, don't print '.' and '..'. Interestingly enough, V7 ls ignores only '.' and '..', while V1 through V6 ignore any file whose name begins with a period. This latter behavior is the default in modern versions of ls, as well.

-c   Use the inode change time, instead of the modification time, with -t or -l.

-d   For directory arguments, print information about the directory itself, not its contents.

-f   "Force" each argument to be read as a directory, and print the name found in each slot. This option disables -l, -r, -s, and -t, and enables -a. (This option apparently existed for filesystem debugging and repair.)

-g   For 'ls -l', use the group name instead of the user name.

-i   Print the inode number in the first column along with the filename or the long listing.

-l   Provide the familiar long format output. Note, however, that V7 'ls -l' printed only the user name, not the user and group names together.

-r   Reverse the sort order, be it alphabetic for filenames or by time.

-s   Print the size of the file in 512-byte blocks. The V7 ls(1) manpage states that indirect blocks (blocks used by the filesystem for locating the data blocks of large files) are also included in the computation, but, as we shall see, this statement was incorrect.

-t   Sort the output by modification time, most recent first, instead of by name.

-u   Use the access time instead of the modification time with -t and/or -l.
The biggest differences between V7 ls and modern ls concern the -a option and the -l option. Modern systems omit all dot files unless -a is given, and they include both user and group names in the -l long listing. On modern systems, -g is taken to mean print only the group name, and -o means print only the user name. For what it’s worth, GNU ls has over 50 options!
7.2 V7 ls Code

The file /usr/src/cmd/ls.c in the V7 distribution contains the code. It is all of 425 lines long.

 1  /*
 2   * list file or directory
 3   */
 4
 5  #include <sys/param.h>
 6  #include <sys/stat.h>
 7  #include <sys/dir.h>
 8  #include <stdio.h>
 9
10  #define NFILES  1024
11  FILE    *pwdf, *dirf;
12  char    stdbuf[BUFSIZ];
13
14  struct lbuf {                       /* Collects needed info */
15      union {
16          char    lname[15];
17          char    *namep;
18      } ln;
19      char    ltype;
20      short   lnum;
21      short   lflags;
22      short   lnl;
23      short   luid;
24      short   lgid;
25      long    lsize;
26      long    lmtime;
27  };
28
29  int     aflg, dflg, lflg, sflg, tflg, uflg, iflg, fflg, gflg, cflg;
30  int     rflg = 1;                   /* Global variables: auto init to 0 */
31  long    year;
32  int     flags;
33  int     lastuid = -1;
34  char    tbuf[16];
35  long    tblocks;
36  int     statreq;
37  struct  lbuf    *flist[NFILES];
38  struct  lbuf    **lastp = flist;
39  struct  lbuf    **firstp = flist;
40  char    *dotp   = ".";
41
42  char    *makename();        /* char *makename(char *dir, char *file) */
43  struct  lbuf *gstat();      /* struct lbuf *gstat(char *file, int argfl) */
44  char    *ctime();           /* char *ctime(time_t *t) */
45  long    nblock();           /* long nblock(long size) */
46
47  #define ISARG   0100000

The program starts with file inclusions (lines 5–8) and variable declarations. The struct lbuf (lines 14–27) encapsulates the parts of the struct stat that are of interest to ls. We see later how this structure is filled. The variables aflg, dflg, and so on (lines 29 and 30) all indicate the presence of the corresponding option. This variable naming style is typical of V7 code. The flist, lastp, and firstp variables (lines 37–39) represent the files that ls reports information about. Note that flist is a fixed-size array, allowing no more than 1024 files to be processed. We see shortly how all these variables are used.

After the variable declarations come function declarations (lines 42–45), and then the definition of ISARG, which distinguishes a file named on the command line from a file found when a directory is read.
49  main(argc, argv)                    /* int main(int argc, char **argv) */
50  char *argv[];
51  {
52      int i;                          /* Variable and function declarations */
53      register struct lbuf *ep, **ep1;
54      register struct lbuf **slastp;
55      struct lbuf **epp;
56      struct lbuf lb;
57      char *t;
58      int compar();
59
60      setbuf(stdout, stdbuf);
61      time(&lb.lmtime);               /* Get current time */
62      year = lb.lmtime - 6L*30L*24L*60L*60L;  /* 6 months ago */

The main() function starts by declaring variables and functions (lines 52–58), setting the buffer for standard output, retrieving the time of day (lines 60–61), and computing the seconds-since-the-Epoch value for approximately six months ago (line 62). Note that all the constants have the L suffix, indicating the use of long arithmetic.
 63     if (--argc > 0 && *argv[1] == '-') {    /* Parse options */
 64         argv++;
 65         while (*++*argv) switch (**argv) {
 66
 67         case 'a':               /* All directory entries */
 68             aflg++;
 69             continue;
 70
 71         case 's':               /* Size in blocks */
 72             sflg++;
 73             statreq++;
 74             continue;
 75
 76         case 'd':               /* Directory info, not contents */
 77             dflg++;
 78             continue;
 79
 80         case 'g':               /* Group name instead of user name */
 81             gflg++;
 82             continue;
 83
 84         case 'l':               /* Long listing */
 85             lflg++;
 86             statreq++;
 87             continue;
 88
 89         case 'r':               /* Reverse sort order */
 90             rflg = -1;
 91             continue;
 92
 93         case 't':               /* Sort by time, not name */
 94             tflg++;
 95             statreq++;
 96             continue;
 97
 98         case 'u':               /* Access time, not modification time */
 99             uflg++;
100             continue;
101
102         case 'c':               /* Inode change time, not modification time */
103             cflg++;
104             continue;
105
106         case 'i':               /* Include inode number */
107             iflg++;
108             continue;
109
110         case 'f':               /* Force reading each arg as directory */
111             fflg++;
112             continue;
113
114         default:                /* Ignore unknown option letters */
115             continue;
116         }
117         argc--;
118     }
Lines 63–118 parse the command-line options. Note the manual parsing code: getopt() hadn’t been invented yet. The statreq variable is set to true when an option requires the use of the stat() system call.
Avoiding an unnecessary stat() call on each file is a big performance win. The stat() call was particularly expensive, because it could involve a disk seek to the inode location, a disk read to read the inode, and then a disk seek back to the location of the directory contents (in order to continue reading directory entries). Modern systems keep inodes in groups, spread throughout a filesystem instead of clustered together at the front, which noticeably improves performance. Nevertheless, stat() calls are still not free; you should use them as needed, but no more than that.

119     if (fflg) {                 /* -f overrides -l, -s, -t, adds -a */
120         aflg++;
121         lflg = 0;
122         sflg = 0;
123         tflg = 0;
124         statreq = 0;
125     }
126     if(lflg) {                  /* Open password or group file */
127         t = "/etc/passwd";
128         if(gflg)
129             t = "/etc/group";
130         pwdf = fopen(t, "r");
131     }
132     if (argc==0) {              /* Use current dir if no args */
133         argc++;
134         argv = &dotp - 1;
135     }

Lines 119–125 handle the -f option, turning off -l, -s, -t, and statreq. Lines 126–131 handle -l, setting the file to be read for user or group information. Remember that the V7 ls shows only one or the other, not both.

If no arguments are left, lines 132–135 set up argv such that it points at a string representing the current directory. The assignment 'argv = &dotp - 1' is valid, although unusual. The '- 1' compensates for the '++argv' on line 137. This avoids special-case code for 'argc == 1' in the main part of the program.
136     for (i=0; i < argc; i++) {          /* Get info about each file */
137         if ((ep = gstat(*++argv, 1))==NULL)
138             continue;
139         ep->ln.namep = *argv;
140         ep->lflags |= ISARG;
141     }
142     qsort(firstp, lastp - firstp, sizeof *lastp, compar);
143     slastp = lastp;
144     for (epp=firstp; epp<slastp; epp++) {   /* Main code, see text */
145         ep = *epp;
146         if (ep->ltype=='d' && dflg==0 || fflg) {
147             if (argc>1)
148                 printf("\n%s:\n", ep->ln.namep);
149             lastp = slastp;
150             readdir(ep->ln.namep);
151             if (fflg==0)
152                 qsort(slastp,lastp - slastp,sizeof *lastp,compar);
153             if (lflg || sflg)
154                 printf("total %D\n", tblocks);
155             for (ep1=slastp; ep1<lastp; ep1++)
156                 pentry(*ep1);
157         } else
158             pentry(ep);
159     }
160     exit(0);
161 }

Lines 136–141 call gstat() for each command-line argument, save the argument's name, and mark the entry as having come from the command line. Line 142 sorts the arguments, and line 143 remembers where the argument list ends. The statements of the main loop (lines 144–159) deserve individual attention:

    if (ep->ltype=='d' && dflg==0 || fflg) ...

Line 146. If the file type is directory and if -d was not provided or if -f was, then ls has to read the directory instead of printing information about the directory itself.
    if (argc>1)
        printf("\n%s:\n", ep->ln.namep);

Lines 147–148. Print the directory name and a colon if multiple files were named on the command line.

    lastp = slastp;
    readdir(ep->ln.namep);
Lines 149–150. Reset lastp from slastp. The flist array acts as a two-level stack of filenames. The command-line arguments are kept in firstp through slastp - 1. When readdir() reads a directory, it puts the struct lbuf structures for the directory contents onto the stack, starting at slastp and going through lastp. This is illustrated in Figure 7.1.

[FIGURE 7.1: The flist array as a two-level stack. Entries from firstp through slastp - 1 come from the command line; entries from slastp through lastp come from readdir().]

    if (fflg==0)
        qsort(slastp,lastp - slastp,sizeof *lastp,compar);

Lines 151–152. Sort the subdirectory entries if -f is not in effect.

    if (lflg || sflg)
        printf("total %D\n", tblocks);
Lines 153–154. Print the total number of blocks used by files in the directory, for -l or -s. This total is kept in the variable tblocks, which is reset for each directory. The %D format string for printf() is equivalent to %ld on modern systems; it means "print a long integer." (V7 also had %ld, see line 192.)

    for (ep1=slastp; ep1<lastp; ep1++)
        pentry(*ep1);

Lines 155–156. Print the information about each entry read from the directory. Otherwise (line 158), the argument was not a directory to be read, and pentry() is called for it directly.

163 pentry(ap)                          /* void pentry(struct lbuf *ap) */
164 struct lbuf *ap;
165 {
166     struct { char dminor, dmajor;}; /* Unused historical artifact from V6 ls */
167     register t;
168     register struct lbuf *p;
169     register char *cp;
170
171     p = ap;
172     if (p->lnum == -1)
173         return;
174     if (iflg)
175         printf("%5u ", p->lnum);                /* Inode number */
176     if (sflg)
177         printf("%4D ", nblock(p->lsize));       /* Size in blocks */

The pentry() routine prints information about a file. Lines 172–173 check whether the lnum field is -1, and return if so. When 'p->lnum == -1' is true, the struct lbuf is not valid. Otherwise, this field is the file's inode number. Lines 174–175 print the inode number if -i is in effect. Lines 176–177 print the total number of blocks if -s is in effect. (As we see below, this number may not be accurate.)
178     if (lflg) {                     /* Long listing: */
179         putchar(p->ltype);          /*   File type */
180         pmode(p->lflags);           /*   Permissions */
181         printf("%2d ", p->lnl);     /*   Link count */
182         t = p->luid;
183         if(gflg)
184             t = p->lgid;
185         if (getname(t, tbuf)==0)
186             printf("%-6.6s", tbuf); /*   User or group */
187         else
188             printf("%-6d", t);
189         if (p->ltype=='b' || p->ltype=='c')     /*   Device: major and minor numbers */
190             printf("%3d,%3d", major((int)p->lsize), minor((int)p->lsize));
191         else
192             printf("%7ld", p->lsize);           /*   Size in bytes */
193         cp = ctime(&p->lmtime);
194         if(p->lmtime < year)                    /*   Modification time */
195             printf(" %-7.7s %-4.4s ", cp+4, cp+20);
196         else printf(" %-12.12s ", cp+4);
197     }
198     if (p->lflags&ISARG)                        /* Filename */
199         printf("%s\n", p->ln.namep);
200     else
201         printf("%.14s\n", p->ln.lname);
202 }
Lines 178–197 handle the -l option. Lines 179–181 print the file's type, permissions, and number of links. Lines 182–184 set t to the user ID or the group ID, based on the -g option. Lines 185–188 retrieve the corresponding name and print it if available; otherwise, the program prints the numeric value.

Lines 189–192 check whether the file is a block or character device. If it is, they print the major and minor device numbers, extracted with the major() and minor() macros; otherwise, they print the file's size. Lines 193–196 print the time of interest. If it's older than six months, the code prints the month, day, and year; otherwise, it prints the month, day, and time (see Section 6.1.3.1, "Simple Time Formatting: asctime() and ctime()," page 170, for the format of ctime()'s result).

Finally, lines 198–201 print the filename. For a command-line argument, we know it's a zero-terminated string, and %s can be used. For a file read from a directory, it may not be zero-terminated, and thus an explicit precision, %.14s, must be used.
204 getname(uid, buf)                   /* int getname(int uid, char buf[]) */
205 int uid;
206 char buf[];
207 {
208     int j, c, n, i;
209
210     if (uid==lastuid)               /* Simple caching, see text */
211         return(0);
212     if(pwdf == NULL)                /* Safety check */
213         return(-1);
214     rewind(pwdf);                   /* Start at front of file */
215     lastuid = -1;
216     do {
217         i = 0;                      /* Index in buf array */
218         j = 0;                      /* Counts fields in line */
219         n = 0;                      /* Converts numeric value */
220         while((c=fgetc(pwdf)) != '\n') {        /* Read lines */
221             if (c==EOF)
222                 return(-1);
223             if (c==':') {           /* Count fields */
224                 j++;
225                 c = '0';
226             }
227             if (j==0)               /* First field is name */
228                 buf[i++] = c;
229             if (j==2)               /* Third field is numeric ID */
230                 n = n*10 + c - '0';
231         }
232     } while (n != uid);             /* Keep searching until ID found */
233     buf[i++] = '\0';
234     lastuid = uid;
235     return(0);
236 }
The getname() function converts a user or group ID number into the corresponding name. It implements a simple caching scheme; if the passed-in uid is the same as the global variable lastuid, then the function returns 0, for OK; the buffer will already contain the name (lines 210–211). lastuid is initialized to -1 (line 33), so this test fails the first time getname() is called.

pwdf is already open on either /etc/passwd or /etc/group (see lines 126–130). The code here checks that the open succeeded and returns -1 if it didn't (lines 212–213).

Surprisingly, ls does not use getpwuid() or getgrgid(). Instead, it takes advantage of the facts that the format of /etc/passwd and /etc/group is identical for the first three fields (name, password, numeric ID) and that both use a colon as separator. Lines 216–232 implement a linear search through the file. j counts the number of colons seen so far: 0 for the name and 2 for the ID number. Thus, while scanning the line, it fills in both the name and the ID number.

Lines 233–235 terminate the name buffer, set the global lastuid to the found ID number, and return 0 for OK.
238 long
239 nblock(size)                        /* long nblock(long size) */
240 long size;
241 {
242     return((size+511)>>9);
243 }
The nblock() function reports how many disk blocks the file uses. This calculation is based on the file’s size as returned by stat(). The V7 block size was 512 bytes—the size of a physical disk sector. The calculation on line 242 looks a bit scary. The ‘>>9’ is a right-shift by nine bits. This divides by 512, to give the number of blocks. (On early hardware, a right-shift was much faster than division.) So far, so good. Now, a file of even one byte still takes up a whole disk block. However, ‘1 / 512’ comes out as zero (integer division truncates), which is incorrect. This explains the ‘size+511’. By adding 511, the code ensures that the sum produces the correct number of blocks when it is divided by 512.
This calculation is only approximate, however. Very large files also have indirect blocks. Despite the claim in the V7 ls(1) manpage, this calculation does not account for indirect blocks. Furthermore, consider the case of a file with large holes (created by seeking way past the end of the file with lseek()). Holes don't occupy disk blocks; however, this is not reflected in the size value. Thus, the calculation produced by nblock(), while usually correct, could produce results that are either smaller or larger than the real case. For these reasons, the st_blocks member was added into the struct stat at 4.2 BSD, and then picked up for System V and POSIX.
245 int m1[] = { 1, S_IREAD>>0, 'r', '-' };
246 int m2[] = { 1, S_IWRITE>>0, 'w', '-' };
247 int m3[] = { 2, S_ISUID, 's', S_IEXEC>>0, 'x', '-' };
248 int m4[] = { 1, S_IREAD>>3, 'r', '-' };
249 int m5[] = { 1, S_IWRITE>>3, 'w', '-' };
250 int m6[] = { 2, S_ISGID, 's', S_IEXEC>>3, 'x', '-' };
251 int m7[] = { 1, S_IREAD>>6, 'r', '-' };
252 int m8[] = { 1, S_IWRITE>>6, 'w', '-' };
253 int m9[] = { 2, S_ISVTX, 't', S_IEXEC>>6, 'x', '-' };
254
255 int *m[] = { m1, m2, m3, m4, m5, m6, m7, m8, m9};
256
257 pmode(aflag)                        /* void pmode(int aflag) */
258 {
259     register int **mp;
260
261     flags = aflag;
262     for (mp = &m[0]; mp < &m[sizeof(m)/sizeof(m[0])];)
263         select(*mp++);
264 }
265
266 select(pairp)                       /* void select(register int *pairp) */
267 register int *pairp;
268 {
269     register int n;
270
271     n = *pairp++;
272     while (--n>=0 && (flags&*pairp++)==0)
273         pairp++;
274     putchar(*pairp);
275 }
Lines 245–275 print the file's permissions. The code is compact and rather elegant; it requires careful study.

• Lines 245–253: The arrays m1 through m9 encode the permission bits to check for, along with the corresponding characters to print. There is one array per character to print in the file mode. The first element of each array is the number of (permission, character) pairs encoded in that particular array. The final element is the character to print in the event that none of the given permission bits are found. Note also how the permissions are specified as 'S_IREAD>>0', 'S_IREAD>>3', 'S_IREAD>>6', and so on. The individual constants for each bit (S_IRUSR, S_IRGRP, etc.) had not been invented yet. (See Table 4.5 in Section 4.6.1, "Specifying Initial File Permissions," page 106.)

• Line 255: The m array points to each of the m1 through m9 arrays.

• Lines 257–264: The pmode() function first sets the global variable flags to the passed-in parameter aflag. It then loops through the m array, passing each element to the select() function. The passed-in element represents one of the m1 to m9 arrays.

• Lines 266–275: The select() function understands the layout of each m1 through m9 array. n is the number of pairs in the array (the first element); line 271 sets it. Lines 272–273 look for permission bits, checking the global variable flags set previously on line 261. Note the use of the ++ operator, both in the loop test and in the loop body. The effect is to skip over pairs in the array as long as the permission bit in the first element of the pair is not found in flags. When the loop ends, either the permission bit has been found, in which case pairp points at the second element of the pair, which is the correct character to print, or it has not been found, in which case pairp points at the default character. In either case, line 274 prints the character that pairp points to.
A final point worth noting is that in C, character constants (such as 'x') have type int, not char.[1] So there's no problem putting such constants into an integer array; everything works correctly.
277 char *
278 makename(dir, file)                 /* char *makename(char *dir, char *file) */
279 char *dir, *file;
280 {
281     static char dfile[100];
282     register char *dp, *fp;
283     register int i;
284
285     dp = dfile;
286     fp = dir;
287     while (*fp)
288         *dp++ = *fp++;
289     *dp++ = '/';
290     fp = file;
291     for (i=0; i<DIRSIZ; i++)
292         *dp++ = *fp++;
293     *dp = 0;
294     return(dfile);
295 }

The makename() function concatenates a directory name and a filename, separated by a slash, into a static buffer, copying at most DIRSIZ (14) characters of the filename.

297 readdir(dir)                        /* void readdir(char *dir) */
298 char *dir;
299 {
300     static struct direct dentry;
301     register int j;
302     register struct lbuf *ep;
303
304     if ((dirf = fopen(dir, "r")) == NULL) {
305         printf("%s unreadable\n", dir);
306         return;
307     }
308     tblocks = 0;
309     for (;;) {
310         if (fread((char *)&dentry, sizeof(dentry), 1, dirf) != 1)
311             break;
312         if (dentry.d_ino==0
313          || aflg==0 && dentry.d_name[0]=='.' && (dentry.d_name[1]=='\0'
314             || dentry.d_name[1]=='.' && dentry.d_name[2]=='\0'))
315             continue;
316         ep = gstat(makename(dir, dentry.d_name), 0);
317         if (ep==NULL)
318             continue;
319         if (ep->lnum != -1)
320             ep->lnum = dentry.d_ino;
321         for (j=0; j<DIRSIZ; j++)
322             ep->ln.lname[j] = dentry.d_name[j];
323     }
324     fclose(dirf);
325 }
Lines 297–325 define the readdir() function, whose job is to read the contents of directories named on the command line. Lines 304–307 open the directory for reading, returning if fopen() fails. Line 308 initializes the global variable tblocks to 0. This was used earlier (lines 153–154) to print the total number of blocks used by files in a directory.

Lines 309–323 are a loop that reads directory entries and adds them to the flist array. Lines 310–311 read one entry, exiting the loop upon end-of-file. Lines 312–315 skip uninteresting entries. If the inode number is zero, this slot isn't used. Otherwise, if -a was not given and the filename is either '.' or '..', skip it.

Lines 316–318 call gstat() with the full name of the file, and a second argument of false, indicating that it's not from the command line. gstat() updates the global lastp pointer and the flist array. A NULL return value indicates some sort of failure. Lines 319–322 save the inode number and name in the struct lbuf. If ep->lnum comes back from gstat() set to -1, it means that the stat() operation on the file failed. Finally, line 324 closes the directory.

The following function, gstat() (lines 327–398), is the core function for the operation of retrieving and storing file information.
327 struct lbuf *
328 gstat(file, argfl)                  /* struct lbuf *gstat(char *file, int argfl) */
329 char *file;
330 {
331     extern char *malloc();
332     struct stat statb;
333     register struct lbuf *rep;
334     static int nomocore;
335
336     if (nomocore)                   /* Ran out of memory earlier */
337         return(NULL);
338     rep = (struct lbuf *)malloc(sizeof(struct lbuf));
339     if (rep==NULL) {
340         fprintf(stderr, "ls: out of memory\n");
341         nomocore = 1;
342         return(NULL);
343     }
344     if (lastp >= &flist[NFILES]) {  /* Check whether too many files given */
345         static int msg;
346         lastp--;
347         if (msg==0) {
348             fprintf(stderr, "ls: too many files\n");
349             msg++;
350         }
351     }
352     *lastp++ = rep;                 /* Fill in information */
353     rep->lflags = 0;
354     rep->lnum = 0;
355     rep->ltype = '-';               /* Default file type */
The static variable nomocore [sic] indicates that malloc() failed upon an earlier call. Since it’s static, it’s automatically initialized to 0 (that is, false). If it’s true upon entry, gstat() just returns NULL. Otherwise, if malloc() fails, ls prints an error message, sets nomocore to true, and returns NULL (lines 334–343). Lines 344–351 make sure that there is still room left in the flist array. If not, ls prints a message (but only once; note the use of the static variable msg), and then reuses the last slot in flist. Line 352 makes the slot lastp points to point to the new struct lbuf (rep). This also updates lastp, which is used for sorting in main() (lines 142 and 152). Lines 353–355 set default values for the flags, inode number, and type fields in the struct lbuf.
356     if (argfl || statreq) {
357         if (stat(file, &statb)<0) {     /* stat() failed */
358             printf("%s not found\n", file);
359             statb.st_ino = -1;
360             statb.st_size = 0;
361             statb.st_mode = 0;
362             if (argfl) {
363                 lastp--;
364                 return(0);
365             }
366         }
367         rep->lnum = statb.st_ino;       /* stat() OK, copy info */
368         rep->lsize = statb.st_size;
369         switch(statb.st_mode&S_IFMT) {
370
371         case S_IFDIR:
372             rep->ltype = 'd';
373             break;
374
375         case S_IFBLK:
376             rep->ltype = 'b';
377             rep->lsize = statb.st_rdev;
378             break;
379
380         case S_IFCHR:
381             rep->ltype = 'c';
382             rep->lsize = statb.st_rdev;
383             break;
384         }
385         rep->lflags = statb.st_mode & ~S_IFMT;
386         rep->luid = statb.st_uid;
387         rep->lgid = statb.st_gid;
388         rep->lnl = statb.st_nlink;
389         if(uflg)
390             rep->lmtime = statb.st_atime;
391         else if (cflg)
392             rep->lmtime = statb.st_ctime;
393         else
394             rep->lmtime = statb.st_mtime;
395         tblocks += nblock(statb.st_size);
396     }
397     return(rep);
398 }
Lines 356–396 handle the call to stat(). If this is a command-line argument or if statreq is true because of an option, the code fills in the struct lbuf as follows: • Lines 357–366: Call stat() and if it fails, print an error message and set values as appropriate, then return NULL (expressed as 0).
• Lines 367–368: Set the inode number and size fields from the struct stat if the stat() succeeded.

• Lines 369–384: Handle the special cases of directory, block device, and character device. In all cases the code updates the ltype field. For devices, the lsize value is replaced with the st_rdev value.

• Lines 385–388: Fill in the lflags, luid, lgid, and lnl fields from the corresponding fields in the struct stat. Line 385 removes the file-type bits, leaving the 12 permissions bits (read/write/execute for user/group/other, and setuid, setgid, and save-text).

• Lines 389–394: Based on command-line options, use one of the three time fields from the struct stat for the lmtime field in the struct lbuf.

• Line 395: Update the global variable tblocks with the number of blocks in the file.
compar(pp1, pp2) struct lbuf **pp1, **pp2; { register struct lbuf *p1, *p2;
int compar(struct lbuf **pp1, struct lbuf **pp2)
p1 = *pp1; p2 = *pp2; if (dflg==0) { if (p1->lflags&ISARG && p1->ltype=='d') { if (!(p2->lflags&ISARG && p2->ltype=='d')) return(1); } else { if (p2->lflags&ISARG && p2->ltype=='d') return(-1); } } if (tflg) { if(p2->lmtime == p1->lmtime) return(0); if(p2->lmtime > p1->lmtime) return(rflg); return(-rflg); } return(rflg * strcmp(p1->lflags&ISARG? p1->ln.namep: p1->ln.lname, p2->lflags&ISARG? p2->ln.namep: p2->ln.lname)); }
The compar() function is dense: There’s a lot happening in little space. The first thing to remember is the meaning of the return value: A negative value means that the first file should sort to an earlier spot in the array than the second, zero means the files
are equal, and a positive value means that the second file should sort to an earlier spot than the first.

The next thing to understand is that ls prints the contents of directories after it prints information about files. Thus the result of sorting should be that all directories named on the command line follow all files named on the command line.

Finally, the rflg variable helps implement the -r option, which reverses the sorting order. It is initialized to 1 (line 30). If -r is used, rflg is set to -1 (lines 89–91).

The following pseudocode describes the logic of compar(); the line numbers in the left margin correspond to those of ls.c:

407	if ls has to read directories                   # dflg == 0
408	    if p1 is a command-line arg and p1 is a directory
409	        if p2 is not a command-line arg and is not a directory
410	            return 1                            # first comes after second
	        else
	            fall through to time test
411	    else                                        # p1 is not a command-line directory
412	        if p2 is a command-line arg and is a directory
413	            return -1                           # first comes before second
	        else
	            fall through to time test

416	if sorting is based on time                     # tflg is true
	    # compare times:
417	    if p2's time is equal to p1's time
418	        return 0
419	    if p2's time > p1's time
420	        return the value of rflg (positive or negative)
421	    return opposite of rflg                     # p2's time < p1's time (negative or positive)

423	Multiply rflg by the result of strcmp() on the
424	two names and return the result
The arguments to strcmp() on lines 423–424 look messy. What’s going on is that different members of the ln union in the struct lbuf must be used, depending on whether the filename is a command-line argument or was read from a directory.
7.3 Summary

• The V7 ls is a relatively small program, yet it touches on many of the fundamental aspects of Unix programming: file I/O, file metadata, directory contents, users and groups, time and date values, sorting, and dynamic memory management.
• The most notable external difference between V7 ls and modern ls is the treatment of the -a and -l options. The V7 version has many fewer options than do modern versions; a noticeable lack is the -R recursive option.

• The management of flist is a clean way to use the limited memory of the PDP-11 architecture yet still provide as much information as possible. The struct lbuf nicely abstracts the information of interest from the struct stat; this simplifies the code considerably. The code for printing the nine permission bits is compact and elegant.

• Some parts of ls use surprisingly small limits, such as the upper bound of 1024 on the number of files or the buffer size of 100 in makename().
Exercises

1. Consider the getname() function. What happens if the requested ID number is 216 and the following two lines exist in /etc/passwd, in this order:

   joe:xyzzy:2160:10:Joe User:/usr/joe:/bin/sh
   jane:zzyxx:216:12:Jane User:/usr/jane:/bin/sh
2. Consider the makename() function. Could it use sprintf() to make the concatenated name? Why or why not?

3. Are lines 319–320 in readdir() really necessary?

4. Take the stat program you wrote for the exercises in "Exercises" for Chapter 6, page 205. Add the nblock() function from the V7 ls, and print the results along with the st_blocks field from the struct stat. Add a visible marker when they're different.

5. How would you grade the V7 ls on its use of malloc()? (Hint: how often is free() called? Where should it be called?)

6. How would you grade the V7 ls for code clarity? (Hint: how many comments are there?)

7. Outline the steps you would take to adapt the V7 ls for modern systems.
8 Filesystems and Directory Walks
In this chapter

8.1 Mounting and Unmounting Filesystems       page 228
8.2 Files for Filesystem Administration       page 238
8.3 Retrieving Per-Filesystem Information     page 244
8.4 Moving Around in the File Hierarchy       page 256
8.5 Walking a File Tree: GNU du               page 269
8.6 Changing the Root Directory: chroot()     page 276
8.7 Summary                                   page 277
Exercises                                     page 278
This chapter completes the discussion of Linux (and Unix) filesystems and directories. We first describe how a disk partition containing a filesystem is added to (and removed from) the logical filesystem namespace, such that in general a user need neither know nor care where a file is physically located, along with the APIs for working with filesystems.
We then describe how to move around within the hierarchical file namespace, how to retrieve the full pathname of the current working directory, and how to easily process arbitrary directory hierarchies (trees), using the nftw() function. Finally, we describe the specialized but important chroot() system call.
8.1 Mounting and Unmounting Filesystems

The unified hierarchical file namespace is a great strength of the Linux/Unix design. This section looks at how administrative files, commands, and the operating system cooperate to build the namespace from separate physical devices that contain file data and metadata.
8.1.1 Reviewing the Background

Chapter 5, "Directories and File Metadata," page 117, introduced inodes for file metadata and described how directory entries link filenames to inodes. It also described partitions and filesystems, and you saw that hard links are restricted to working within a single filesystem because directories contain only inode numbers and inode numbers are not unique across the entire set of in-use filesystems.

Besides inodes and data blocks, filesystems also contain one or more copies of the superblock. This is a special disk block that describes the filesystem; its information is updated as the filesystem itself changes. For example, it contains counts of free and used inodes, free and used blocks, and other information. It also includes a magic number: a unique special value in a special location that identifies the type of the filesystem. (We'll see how this is relevant, shortly.)

Making a partition that contains a filesystem available for use is called mounting the filesystem. Removing a filesystem from use is called, not surprisingly, unmounting the filesystem.
These two jobs are accomplished with the mount and umount [sic] programs, named for the corresponding system calls. Every Unix system's mount() system call has a different interface. Because mounting and unmounting are considered implementation issues, POSIX purposely does not standardize these system calls.

You mount a filesystem onto a directory; such a directory is referred to as the filesystem's mount point. By convention the directory should be empty, but nothing enforces this. However, if the mount point is not empty, all of its contents become completely inaccessible while a filesystem is mounted on it.1

The kernel maintains a unique number, known as the device number, that identifies each mounted partition. For this reason, it is the (device, inode) pair that uniquely identifies a file; when the struct stat structures for two filenames indicate that both numbers are the same, you can be sure that they refer to the same file.

As mentioned earlier, user-level software places the inode structures and other metadata onto a disk partition, thereby creating the filesystem. This same software creates an initial root directory for the filesystem. Thus, we have to make a distinction between "the root directory named /," which is the topmost directory in the hierarchical filename namespace, and "the root directory of a filesystem," which is each filesystem's individual topmost directory. The / directory is also the "root directory" of the "root filesystem."

For reasons described in the sidebar, a filesystem's root directory always has inode number 2 (although this is not formally standardized). Since there can be multiple filesystems, each one's root directory has the same inode number, 2. When resolving a pathname, the kernel knows where each filesystem is mounted and arranges for the mount point's name to refer to the root directory of the mounted filesystem.
Furthermore, ‘..’ in the root of a mounted filesystem is made to refer to the parent directory of the mount point. Figure 8.1 shows two filesystems: one for the root directory, and one for /usr, before /usr is mounted. Figure 8.2 shows the situation after /usr is mounted.
1 GNU/Linux and Solaris allow you to mount one file on top of another; this has advanced uses, which we don't otherwise discuss.
[Figure: two panels, each entry labeled with its device and inode number. The root filesystem on /dev/hda1 holds /, with entries ., .., bin, dev, and usr; both . and .. of / are inode 2, and the mount point /usr is empty (inode 57). The separate /usr filesystem on /dev/hda3 holds its own root (inode 2) with entries bin, tmp, and share.]

FIGURE 8.1 Separate filesystems, before mounting
The / directory, the root of the entire logical hierarchy, is special in an additional way: /. and /.. refer to the same directory; this is not true of any other directory on the system. (Thus, after something like ‘cd /../../../..’, you're still in /.) This behavior is implemented in a simple fashion: Both /. and /.. are hard links to the filesystem's root directory. (You can see this in both Figure 8.1 and Figure 8.2.) Every filesystem works this way, but the kernel treats / specially: for the filesystem mounted on /, there is no need to treat ‘..’ as a special case, since it is already a hard link back to / itself.
[Figure: the same two filesystems after mounting. In the root filesystem on /dev/hda1, the name usr now refers to the root directory (inode 2) of the /usr filesystem on /dev/hda3, whose entries bin, tmp, and share become visible under /usr; dot-dot in the mounted filesystem's root is on the root filesystem.]

FIGURE 8.2 Separate filesystems, after mounting
Root Inode Numbers

The inode number for the root directory of a filesystem is always 2. Why is that? The answer has to do with both technology and history.

As mentioned in Section 5.3, "Reading Directories," page 132, a directory entry with an inode number of zero indicates an unused, or empty, slot. So inode 0 cannot be used for a real file or directory.

OK, so what about inode 1? Well, particularly in the 1970s and 1980s, disks were not as well made as they are now. When you bought a disk, it came with a (paper) list of bad blocks: known locations on the disk that were not usable. Each operating system had to have a way to keep track of those bad blocks and avoid using them. Under Unix, you did this by creating a special-purpose file, whose data blocks were the ones known to be bad. This file was attached on inode 1, leaving inode 2 as the first inode usable for regular files or directories.

Modern disk drives have considerable built-in electronics and handle bad blocks on their own. Thus, technically, it would be feasible to use inode 1 for a file. However, since so much Unix software assumes that inode 2 is the inode for filesystem root directories, Linux follows this convention as well. (However, Linux sometimes uses inode 1 for nonnative filesystems, such as vfat or /proc.)
8.1.2 Looking at Different Filesystem Types

NOTE The discussion in this section is specific to Linux. However, most modern Unix systems have similar features. We encourage you to explore your system's documentation.
Historically, V7 Unix supported only a single filesystem type; every partition's metadata and directory organization were structured the same way. 4.1 BSD used a filesystem with the same structure as that of V7, but with a 1024-byte block size instead of a 512-byte one. 4.2 BSD introduced the "BSD Fast Filesystem," which dramatically changed the layout of inodes and data on disk and enabled the use of much larger block sizes. (In general, using larger contiguous blocks of data provides better throughput, especially for file reads.)

Through 4.3 BSD and System V Release 2 in the early and mid-1980s, Unix systems continued to support just one filesystem type. To switch a computer from one filesystem to another,2 you had to first back up each filesystem to archival media (9-track tape), upgrade the system, and then restore the data.

In the mid-1980s, Sun Microsystems developed a kernel architecture that made it possible to use multiple filesystem architectures at the same time. This design was implemented for their SunOS operating system, primarily to support Sun's Network File System (NFS). However, as a consequence it was also possible to support multiple on-disk architectures. System V Release 3 used a similar architecture to support the Remote File System (RFS), but it continued to support only one on-disk architecture.3 (RFS was never widely used and is now only a historical footnote.)

Sun's general design became popular and widely implemented in commercial Unix systems, including System V Release 4. Linux and BSD systems use a variant of this design to support multiple on-disk filesystem formats. In particular, it's common for all Unix variants on Intel x86 hardware to be able to mount MS-DOS/Windows FAT filesystems, including those supplying long filenames, as well as ISO 9660-formatted CD-ROMs.

2 For example, consider upgrading a VAX 11/780 from 4.1 BSD to 4.2 BSD.
3 System V Release 3 supported two different block sizes: 512 bytes and 1024 bytes, but otherwise the disk organization was the same.
Linux has several native (that is, on-disk) filesystems. The most popular are the ext2 and ext3 filesystems. Many more filesystem types are available, however. You can find information about most of them in the /usr/src/linux/Documentation/filesystems/ directory (if you have kernel source installed). Table 8.1 lists the various filesystem names, with brief descriptions of each. The abbreviation "RW" means "read/write" and "RO" means "read only."

TABLE 8.1 Supported in-kernel Linux filesystems (kernel 2.4.x)

Name         Support  Description
afs          RW       The Andrew File System.
adfs         RW       Acorn Advanced Disc Filing System.
affs         RO, RW   Amiga Fast File System. Read only vs. read/write depends upon the version of the filesystem.
autofs       RW       Filesystem for interacting with the automounter daemon.
befs         RO       BeOS Filesystem. Marked as alpha software.
bfs          RW       SCO UnixWare Boot Filesystem.
binfmt_misc  RW       Special filesystem for running interpreters on compiled files (for example, Java files).
efs          RW       A filesystem developed for SGI's Unix variant named Irix.
coda         RW       An experimental distributed filesystem developed at CMU.
cramfs       RO       A small filesystem for storing files in ROM.
devfs        RW       A way to dynamically provide device files for /dev (obsolete).
devpts       RW       Special filesystem for pseudo-ttys.
ext2         RW       The Second Extended Filesystem. This is the default GNU/Linux filesystem, although some distributions now use ext3.
ext3         RW       The ext2 filesystem with journaling.
hfs          RW       Apple Mac OS Hierarchical File System.
hpfs         RW       The OS/2 High Performance File System.
intermezzo   RW       An experimental distributed filesystem for working while disconnected. See the InterMezzo web site (http://www.inter-mezzo.org).
jffs         RW       Journaled Flash Filesystem (for embedded systems).
jffs2        RW       Journaled Flash Filesystem 2 (also for embedded systems).
iso9660      RO       The ISO 9660 CD-ROM filesystem. The Rock Ridge extensions are also supported, making a CD-ROM that uses them look like a normal (but read-only) filesystem.
jfs          RW       IBM's Journaled File System for Linux.
ncp          RW       Novell's NCP protocol for NetWare; a remote filesystem client.
ntfs         RO       Support for Windows NTFS filesystem.
openpromfs   RO       A /proc filesystem for the PROM on SPARC systems.
proc         RW       Access to per-process and kernel information.
qnx4         RW       The QNX4 (a small, real-time operating system) filesystem.
ramfs        RW       A filesystem for creating RAM disks.
reiserfs     RW       An advanced journaling filesystem.
romfs        RO       A filesystem for creating simple read-only RAM disks.
smbfs        RW       Client support for SMB filesystems (Windows file shares).
sysv         RW       The System V Release 2, Xenix, Minix, and Coherent filesystems. coherent, minix, and xenix are aliases.
tmpfs        RW       A ramdisk filesystem, supporting dynamic growth.
udf          RO       The UDF filesystem format used by DVD-ROMs.
ufs          RO, RW   The BSD Fast Filesystem; read/write for modern systems.
umsdos       RW       An extension to vfat making it look more like a Unix filesystem.
usbfs        RW       A special filesystem for working with USB devices. The original name was usbdevfs and this name still appears, for example, in the output of mount.
vfat         RW       All variants of MS-DOS/Windows FAT filesystems. msdos and fat are components.
vxfs         RW       The Veritas VxFS journaling filesystem.
xfs          RW       A high-performance journaling filesystem developed by SGI for Linux. See the XFS web site (http://oss.sgi.com/projects/xfs/).
Not all of these filesystems are supported by the mount command; see mount(8) for the list of those that are supported.

Journaling is a technique, pioneered in database systems, for improving the performance of file updates, in such a way that filesystem recovery in the event of a crash can be done both correctly and quickly. As of this writing, several different journaling filesystems are available and competing for prominence in the GNU/Linux world. ext3 is one such; it has the advantage of being upwardly compatible with existing ext2 filesystems, and it's easy to convert a filesystem back and forth between the two types. (See tune2fs(8).) ReiserFS and XFS also have strong followings.

The fat, msdos, umsdos, and vfat filesystems all share common code. In general, you should use vfat to mount Windows FAT-32 (or other FAT-xx) partitions, and umsdos if you wish to use a FAT partition as the root filesystem for your GNU/Linux system.

The Coherent, MINIX, original System V, and Xenix filesystems all have similar on-disk structures. The sysv filesystem type supports all of them; the four names coherent, minix, sysv, and xenix are aliases one for the other. The coherent and xenix names will eventually be removed.

The BSD Fast Filesystem has evolved somewhat over the years. The ufs filesystem supports read/write operation for the version from 4.4 BSD, which is the basis for the three widely used BSD operating systems: FreeBSD, NetBSD, and OpenBSD. It also supports read/write operation for Sun's Solaris filesystem, for both SPARC and Intel x86 systems. The original BSD format and that from the NeXTStep operating system are supported read-only.

The "RO" designations for befs and ntfs mean that filesystems of those types can be mounted and read but files cannot be written on them or removed from them. (This may change with time; check your system's documentation.) The cramfs, iso9660, romfs, and udf filesystems are marked "RO" because the underlying media are inherently read-only.

Two filesystem types no longer exist: ext, which was the original Extended Filesystem, and xiafs, which extended the original MINIX filesystem for longer names and larger file sizes. xiafs and ext2 came out approximately simultaneously, but ext2 eventually became the dominant filesystem.4
8.1.3 Mounting Filesystems: mount

The mount command mounts filesystems, splicing their contents into the system file hierarchy at their mount points. Under GNU/Linux, it is somewhat complicated since it has to deal with all the known filesystem types and their options. Normally, only root can run mount, although it's possible to make exceptions for certain cases, as is discussed later in the chapter. You specify the filesystem type with the -t option:

mount [ options ] device mount-point

For example (# is the root prompt):

# mount -t iso9660 /dev/cdrom /mnt/cdrom         Mount CD-ROM
# mount -t vfat /dev/fd0 /mnt/floppy             Mount MS-DOS floppy
# mount -t nfs files.example.com:/ /mnt/files    Mount NFS filesystem
You can use ‘-t auto’ to force mount to guess the filesystem type. This usually works, although if you know for sure what kind of filesystem you have, it helps to supply the type and avoid the chance that mount will guess incorrectly. mount does this guessing by default, so ‘-t auto’ isn't strictly necessary.

GNU/Linux systems provide a special kind of mounting by means of the loopback device. In this way, a filesystem image contained in a regular file can be mounted as if it were an actual disk device. This capability is very useful, for example, with CD-ROM images. It allows you to create one and try it out, without having to burn it to a writable CD and mount the CD. The following example uses the first CD image from the Red Hat 9 distribution of GNU/Linux:

# ls -l shrike-i386-disc1.iso                    Examine CD image file
-rw-r--r--    1 arnold   devel    668991488 Apr 11 05:13 shrike-i386-disc1.iso
# mount -t iso9660 -o ro,loop shrike-i386-disc1.iso /mnt/cdrom   Mount it on /mnt/cdrom
# cd /mnt/cdrom                                  Go there
# ls                                             Look at files
autorun               README.it              RELEASE-NOTES-fr.html
dosutils              README.ja              RELEASE-NOTES.html
EULA                  README.ko              RELEASE-NOTES-it.html
GPL                   README.pt              RELEASE-NOTES-ja.html
images                README.pt_BR           RELEASE-NOTES-ko.html
isolinux              README.zh_CN           RELEASE-NOTES-pt_BR.html
README                README.zh_TW           RELEASE-NOTES-pt.html
README-Accessibility  RedHat                 RELEASE-NOTES-zh_CN.html
README.de             RELEASE-NOTES          RELEASE-NOTES-zh_TW.html
README.es             RELEASE-NOTES-de.html  RPM-GPG-KEY
README.fr             RELEASE-NOTES-es.html  TRANS.TBL
# cd                                             Change out
# umount /mnt/cdrom                              Unmount

4 Source: http://www.ife.ee.ethz.ch/music/software/sag/subsection2_5_4_3.html.
Being able to mount an ISO 9660 image this way is particularly helpful when you are testing scripts that make CD images. You can create an image in a regular file, mount it, and verify that it’s arranged correctly. Then, once you’re sure it’s correct, you can copy the image to a writable CD (“burn” the CD). The loopback facility is useful for mounting floppy disk images, too.
8.1.4 Unmounting Filesystems: umount

The umount command unmounts a filesystem, removing its contents from the system file hierarchy. The usage is as follows:

umount file-or-device

The filesystem being unmounted must not be busy. This means that there aren't any processes with open files on the filesystem and that no process has a directory on the filesystem as its current working directory:

$ mount                                          Show what's mounted
/dev/hda2 on / type ext3 (rw)                    / is on a real device
none on /proc type proc (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/hda5 on /d type ext3 (rw)                   So is /d
none on /dev/pts type devpts (rw,gid=5,mode=620)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
$ su                                             Switch to superuser
Password:                                        Password does not echo
# cd /d                                          Make /d the current directory
# umount /d                                      Try to unmount /d
umount: /d: device is busy                       Doesn't work; it's still in use
# cd /                                           Change out of /d
# umount /d                                      Try to unmount /d again
#                                                Silence is golden: unmount worked
8.2 Files for Filesystem Administration

The /etc/fstab file5 lists filesystems that can be mounted. Most are automatically mounted when the system boots. The format is as follows:

device   mount-point   fs-type   options   dump-freq   fsck-pass

(The dump-freq and fsck-pass are administrative features that aren't relevant to the current discussion.) For example, on our system, the file looks like this:

$ cat /etc/fstab
# device      mount-point  type     options                               freq  passno
/dev/hda3     /            ext3     defaults                              1     1       Root filesystem
/dev/hda5     /d           ext3     defaults                              1     2
none          /dev/pts     devpts   gid=5,mode=620                        0     0
none          /proc        proc     defaults                              0     0
none          /dev/shm     tmpfs    defaults                              0     0
# Windows partition:
/dev/hda1     /win         vfat     noauto,defaults,user,uid=2076,gid=10  0     0
/dev/hda3     swap         swap     defaults                              0     0
/dev/cdrom    /mnt/cdrom   iso9660  noauto,owner,ro                       0     0       World mountable
/dev/fd0      /mnt/floppy  auto     noauto,owner                          0     0       Floppy, same
Comments beginning with # are allowed. Discussion of the various options is provided shortly, in Section 8.2.1, "Using Mount Options," page 239.

This same file format is used for /etc/mtab, which is where mount writes information about filesystems as they are mounted; umount removes information from that file when a filesystem is unmounted:

$ cat /etc/mtab
/dev/hda2 / ext3 rw 0 0
none /proc proc rw 0 0
usbdevfs /proc/bus/usb usbdevfs rw 0 0
/dev/hda5 /d ext3 rw 0 0
none /dev/pts devpts rw,gid=5,mode=620 0 0
none /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
/dev/hda1 /win vfat rw,noexec,nosuid,nodev,uid=2076,gid=10,user=arnold 0 0

The kernel makes (almost) the same information available in /proc/mounts, in the same format:

$ cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root / ext3 rw 0 0
/proc /proc proc rw 0 0
usbdevfs /proc/bus/usb usbdevfs rw 0 0
/dev/hda5 /d ext3 rw 0 0
none /dev/pts devpts rw 0 0
none /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
/dev/hda1 /win vfat rw,nosuid,nodev,noexec 0 0

5 On GNU/Linux and most systems. Solaris and some systems based on System V Release 4 use /etc/vfstab, possibly with a different format.
Note that /etc/mtab has some information that /proc/mounts doesn’t. (For example, see the line for the /win mount point.) On the flip side, it’s possible (using ‘mount -f’) to put entries into /etc/mtab that aren’t real (this practice has its uses, see mount(8)). To sum up, /proc/mounts always describes what is really mounted; however, /etc/mtab contains information about mount options that /proc/mounts doesn’t. Thus, to get the full picture, you may have to read both files.
8.2.1 Using Mount Options

The mount command supports options that control what operations the kernel will or will not allow for the filesystem. There are a fair number of these. Only two are really useful on the command line:

ro
    Mount the filesystem read-only. This is necessary for read-only media such as CD-ROMs and DVDs.

loop
    Use the loopback device for treating a regular file as a filesystem. We showed an example of this earlier (see Section 8.1.3, "Mounting Filesystems: mount," page 236).

Options are passed with the -o command-line option and can be grouped, separated by commas. For example, here is the command line used earlier:

mount -t iso9660 -o ro,loop shrike-i386-disc1.iso /mnt/cdrom

The rest of the options are intended for use in /etc/fstab (although they can also be used on the command line). The following list provides the ones we think are most important for day-to-day use.

auto, noauto
    Filesystems marked auto are to be mounted when the system boots through ‘mount -a’ (mount all filesystems). noauto filesystems must be mounted manually. Such filesystems still appear in /etc/fstab along with the other filesystems. (See, for example, the entry for /win in our /etc/fstab file, shown previously.)

defaults
    Use the default options rw, suid, dev, exec, auto, nouser, and async. (async is an advanced option that increases I/O throughput.)

dev, nodev
    Allow (don't allow) the use of character or block device files on the filesystem.

exec, noexec
    Allow (don't allow) execution of binary executables on the filesystem.

user, nouser
    Allow (don't allow) any user to mount this filesystem. This is useful for CD-ROMs; even if you're on a single-user workstation, it's convenient to not have to switch to root just to mount a CD. Only the user who mounted the filesystem can unmount it. user implies the noexec, nosuid, and nodev options.

suid, nosuid
    Support (don't support) the setuid and setgid bits on executables on the filesystem.

rw
    Mount the filesystem read-write.

The nodev, noexec, and nosuid options are particularly valuable for security on floppy-disk and CD-ROM filesystems. Consider a student environment in which students are allowed to mount their own floppies or CDs. It's trivial to craft a filesystem with a setuid-root shell or a world-writable device file for the hard disk that could let an enterprising user change permissions on system files.

Each filesystem has additional options specific to it. One important option for ext2 and ext3 is the grpid option. We defer discussion of this option until Section 11.5.1, "Default Group for New Files and Directories," page 412. The details for all supported filesystems can be found in the mount(8) manpage.

As a concrete example, reconsider the line for the Windows partition on our system:

# device     mount-point  type   options                               freq  passno
/dev/hda1    /win         vfat   noauto,defaults,user,uid=2076,gid=10  0     0

The noauto option prevents the Windows partition from being mounted at boot time. The defaults option is the same as rw,suid,dev,exec,async. The user option allows us to mount the filesystem without being root. The uid= and gid= options force the files in /win to belong to us as a regular user so that we don't need to be root when working on that partition.
8.2.2 Working with Mounted Filesystems: getmntent()

Any of /etc/fstab, /etc/mtab, and /proc/mounts can be read programmatically with the getmntent() suite of routines:

#include <stdio.h>                                               GLIBC
#include <mntent.h>

FILE *setmntent(const char *filename, const char *type);
struct mntent *getmntent(FILE *filep);
int addmntent(FILE *filep, const struct mntent *mnt);
int endmntent(FILE *filep);
char *hasmntopt(const struct mntent *mnt, const char *opt);
setmntent() opens the file containing mount-point entries. The filename argument is the file to open. The type argument is like the second argument to fopen(), indicating read, write, or read/write access. (Consider the mount command, which has to add an entry to /etc/mtab for each filesystem it mounts, and umount, which has to remove one.) The returned value of type FILE * is then used with the rest of the routines.

getmntent() reads through the file, returning a pointer to a static struct mntent, which is filled in with the appropriate values. This static storage is overwritten on each call. It returns NULL when there are no more entries. (This is similar to the routines for reading the password and group files; see Section 6.3, "User and Group Names," page 195.)

addmntent() is called to add more information to the end of the open file; it's intended for use by mount.

endmntent() closes the open file; call it when you're done processing. Don't just call fclose(); other internal data structures associated with the FILE * variable may need to be cleaned up.

hasmntopt() is a more specialized function. It scans the struct mntent passed as the first parameter for a mount option matching the second argument. If the option is found, it returns the address of the matching substring. Otherwise, it returns NULL.
Chapter 8 • Filesystems and Directory Walks
242
The fields in the struct mntent correspond directly to the fields in the /etc/fstab file. It looks like this:

struct mntent {
    char *mnt_fsname;    /* Device or server for filesystem. */
    char *mnt_dir;       /* Directory mounted on. */
    char *mnt_type;      /* Type of filesystem: ufs, nfs, etc. */
    char *mnt_opts;      /* Comma-separated options for fs. */
    int mnt_freq;        /* Dump frequency (in days). */
    int mnt_passno;      /* Pass number for `fsck'. */
};
The normal paradigm for working with mounted filesystems is to write an outer loop that reads /etc/mtab, processing one struct mntent at a time. Our first example, ch08-mounted.c, does exactly that:

 1  /* ch08-mounted.c --- print a list of mounted filesystems */
 2
 3  /* NOTE: GNU/Linux specific! */
 4
 5  #include <stdio.h>
 6  #include <errno.h>
 7  #include <mntent.h>     /* for getmntent(), et al. */
 8  #include <unistd.h>     /* for getopt() */
 9
10  void process(const char *filename);
11  void print_mount(const struct mntent *fs);
12
13  char *myname;
14
15  /* main --- process options */
16
17  int main(int argc, char **argv)
18  {
19      int c;
20      char *file = "/etc/mtab";    /* default file to read */
21
22      myname = argv[0];
23      while ((c = getopt(argc, argv, "f:")) != -1) {
24          switch (c) {
25          case 'f':
26              file = optarg;
27              break;
28          default:
29              fprintf(stderr, "usage: %s [-f fstab-file]\n", argv[0]);
30              exit(1);
31          }
32      }
33
34      process(file);
35      return 0;
36  }
37
38  /* process --- read struct mntent structures from file */
39
40  void process(const char *filename)
41  {
42      FILE *fp;
43      struct mntent *fs;
44
45      fp = setmntent(filename, "r");    /* read only */
46      if (fp == NULL) {
47          fprintf(stderr, "%s: %s: could not open: %s\n",
48                  myname, filename, strerror(errno));
49          exit(1);
50      }
51
52      while ((fs = getmntent(fp)) != NULL)
53          print_mount(fs);
54
55      endmntent(fp);
56  }
57
58  /* print_mount --- print a single mount entry */
59
60  void print_mount(const struct mntent *fs)
61  {
62      printf("%s %s %s %s %d %d\n",
63          fs->mnt_fsname,
64          fs->mnt_dir,
65          fs->mnt_type,
66          fs->mnt_opts,
67          fs->mnt_freq,
68          fs->mnt_passno);
69  }
Unlike most of the programs that we’ve seen up to now, this one is Linux specific. Many Unix systems have similar routines, but they’re not guaranteed to be identical. By default, ch08-mounted reads /etc/mtab, printing the information about each mounted filesystem. The -f option allows you to specify a different file to read, such as /proc/mounts or even /etc/fstab. The main() function processes the command line (lines 23–32) and calls process() on the named file. (This program follows our standard boilerplate.) process(), in turn, opens the file (line 45), and loops over each returned filesystem
(lines 52–53). When done, it closes the file (line 55).
The print_mount() function prints the information in the struct mntent. The output ends up being much the same as that of ‘cat /etc/mtab’:

$ ch08-mounted                                 Run the program
/dev/hda2 / ext3 rw 0 0
none /proc proc rw 0 0
usbdevfs /proc/bus/usb usbdevfs rw 0 0
/dev/hda5 /d ext3 rw 0 0
none /dev/pts devpts rw,gid=5,mode=620 0 0
none /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
/dev/hda1 /win vfat rw,noexec,nosuid,nodev,uid=2076,gid=10,user=arnold 0 0
8.3 Retrieving Per-Filesystem Information

Printing per-filesystem information is all fine and good, but it’s not exciting. Once we know that a particular mount point represents a filesystem, we want information about the filesystem. This allows us to do things like print the information retrieved by df and ‘df -i’:

$ df                                                  Show free/used space
Filesystem    1K-blocks      Used  Available Use%  Mounted on
/dev/hda2       6198436   4940316     943248  84%  /
/dev/hda5      61431520  27618536   30692360  48%  /d
none             256616         0     256616   0%  /dev/shm
/dev/hda1       8369532   2784700    5584832  34%  /win

$ df -i                                               Show free/used inodes
Filesystem      Inodes    IUsed     IFree IUse%  Mounted on
/dev/hda2       788704   233216    555488   30%  /
/dev/hda5      7815168   503243   7311925    7%  /d
none             64154        1     64153    1%  /dev/shm
/dev/hda1            0        0         0     -  /win
8.3.1 POSIX Style: statvfs() and fstatvfs()

Early Unix systems had only one kind of filesystem. For them, it was sufficient if df read the superblock of each mounted filesystem, extracted the relevant statistics, and formatted them nicely for printing. (The superblock was typically the second block in the filesystem; the first was the boot block, to hold bootstrapping code.) However, in the modern world, such an approach would be untenable. POSIX provides an XSI extension to access this information. The main function is called
statvfs(). (The “vfs” part comes from the underlying SunOS technology, later used in System V Release 4, called a virtual filesystem.) There are two functions:

XSI

#include <sys/types.h>
#include <sys/statvfs.h>

int statvfs(const char *path, struct statvfs *buf);
int fstatvfs(int fd, struct statvfs *buf);

statvfs() uses a pathname for any file; it returns information about the filesystem containing the file. fstatvfs() accepts an open file descriptor as its first argument; here too, the information returned is about the filesystem containing the open file. The struct statvfs contains the following members:

struct statvfs {
    unsigned long int f_bsize;      /* Block size */
    unsigned long int f_frsize;     /* Fragment size (“fundamental block size”) */
    fsblkcnt_t f_blocks;            /* Total number of blocks */
    fsblkcnt_t f_bfree;             /* Total number of free blocks */
    fsblkcnt_t f_bavail;            /* Number of available blocks (≤ f_bfree) */
    fsfilcnt_t f_files;             /* Total number of inodes */
    fsfilcnt_t f_ffree;             /* Total number of free inodes */
    fsfilcnt_t f_favail;            /* Number of available inodes (≤ f_files) */
    unsigned long int f_fsid;       /* Filesystem ID */
    unsigned long int f_flag;       /* Flags: ST_RDONLY and/or ST_NOSUID */
    unsigned long int f_namemax;    /* Maximum filename length */
};
The information it contains is enough to write df:

unsigned long int f_bsize
    The block size is the preferred size for doing I/O. The filesystem attempts to keep at least f_bsize bytes worth of data in contiguous sectors on disk. (A sector is the smallest amount of addressable data on the disk. Typically, a disk sector is 512 bytes.)

unsigned long int f_frsize
    Some filesystems (such as the BSD Fast Filesystem) distinguish between blocks and fragments of blocks. Small files whose total size is smaller than a block reside in some number of fragments. This avoids wasting disk space (at the admitted cost of more complexity in the kernel code). The fragment size is chosen at the time the filesystem is created.

fsblkcnt_t f_blocks
    The total number of blocks (in units of f_bsize) in the filesystem.

fsblkcnt_t f_bfree
    The total number of free blocks in the filesystem.

fsblkcnt_t f_bavail
    The number of blocks that may actually be used. Some filesystems reserve a percentage of the filesystem’s blocks for use by the superuser, in case the filesystem fills up. Modern systems reserve around 5 percent, although this number can be changed by an administrator. (See tune2fs(8) on a GNU/Linux system, and tunefs(8) on Unix systems.)

fsfilcnt_t f_files
    The total number of inodes (“file serial numbers,” in POSIX parlance) on the filesystem. This number is usually initialized and made permanent when the filesystem is created.

fsfilcnt_t f_ffree
    The total number of free inodes.

fsfilcnt_t f_favail
    The number of inodes that may actually be used. Some percentage of the inodes are reserved for the superuser, just as for blocks.

unsigned long int f_fsid
    The filesystem ID. POSIX doesn’t specify what this represents, and it’s not used under Linux.

unsigned long int f_flag
    Flags giving information about the filesystem. POSIX specifies two: ST_RDONLY, for a read-only filesystem (such as a CD-ROM), and ST_NOSUID, which disallows the use of the setuid and setgid permission bits on executables. GNU/Linux systems provide additional flags; they are listed in Table 8.2.

unsigned long int f_namemax
    The maximum length of a filename. This refers to each individual component in a pathname; in other words, the maximum length for a directory entry.
TABLE 8.2 GLIBC values for f_flag

Flag              POSIX  Meaning
ST_MANDLOCK              Enforce mandatory locking (see Section 14.2, page 531).
ST_NOATIME               Don’t update the access time field on each access.
ST_NODEV                 Disallow access through device files.
ST_NODIRATIME            Don’t update the access time field of directories.
ST_NOEXEC                Disallow execution of binaries.
ST_NOSUID           ✓    Filesystem disallows the use of setuid and setgid bits.
ST_RDONLY           ✓    Filesystem is read-only.
ST_SYNCHRONOUS           All writes are synchronous (see Section 4.6.3, page 110).
The fsblkcnt_t and fsfilcnt_t types are defined in <sys/types.h>. They are typically unsigned long, but on modern systems they may even be a 64-bit type, since disks have gotten very large. The following program, ch08-statvfs.c, shows how to use statvfs():

 1  /* ch08-statvfs.c --- demonstrate statvfs */
 2
 3  /* NOTE: GNU/Linux specific! */
 4
 5  #include <stdio.h>
 6  #include <errno.h>
 7  #include <mntent.h>        /* for getmntent(), et al. */
 8  #include <unistd.h>        /* for getopt() */
 9  #include <sys/types.h>
10  #include <sys/statvfs.h>
11
12  void process(const char *filename);
13  void do_statvfs(const struct mntent *fs);
14
15  int errors = 0;
16  char *myname;
17
18  /* main --- process options */
19
20  int main(int argc, char **argv)
21  {
22      int c;
23      char *file = "/etc/mtab";    /* default file to read */
24
25      myname = argv[0];
26      while ((c = getopt(argc, argv, "f:")) != -1) {
27          switch (c) {
28          case 'f':
29              file = optarg;
30              break;
31          default:
32              fprintf(stderr, "usage: %s [-f fstab-file]\n", argv[0]);
33              exit(1);
34          }
35      }
36
37      process(file);
38      return (errors != 0);
39  }
40
41  /* process --- read struct mntent structures from file */
42
43  void process(const char *filename)
44  {
45      FILE *fp;
46      struct mntent *fs;
47
48      fp = setmntent(filename, "r");    /* read only */
49      if (fp == NULL) {
50          fprintf(stderr, "%s: %s: could not open: %s\n",
51                  myname, filename, strerror(errno));
52          exit(1);
53      }
54
55      while ((fs = getmntent(fp)) != NULL)
56          do_statvfs(fs);
57
58      endmntent(fp);
59  }
Lines 1–59 are essentially the same as ch08-mounted.c. main() handles the command line, and process() loops over each mounted filesystem. do_statvfs() does the real work, printing the struct statvfs for each interesting filesystem.
61  /* do_statvfs --- Use statvfs and print info */
62
63  void do_statvfs(const struct mntent *fs)
64  {
65      struct statvfs vfs;
66
67      if (fs->mnt_fsname[0] != '/')    /* skip nonreal filesystems */
68          return;
69
70      if (statvfs(fs->mnt_dir, & vfs) != 0) {
71          fprintf(stderr, "%s: %s: statvfs failed: %s\n",
72              myname, fs->mnt_dir, strerror(errno));
73          errors++;
74          return;
75      }
76
77      printf("%s, mounted on %s:\n", fs->mnt_dir, fs->mnt_fsname);
78      printf("\tf_bsize: %ld\n", (long) vfs.f_bsize);
79      printf("\tf_frsize: %ld\n", (long) vfs.f_frsize);
80      printf("\tf_blocks: %lu\n", (unsigned long) vfs.f_blocks);
81      printf("\tf_bfree: %lu\n", (unsigned long) vfs.f_bfree);
82      printf("\tf_bavail: %lu\n", (unsigned long) vfs.f_bavail);
83      printf("\tf_files: %lu\n", (unsigned long) vfs.f_files);
84      printf("\tf_ffree: %lu\n", (unsigned long) vfs.f_ffree);
85      printf("\tf_favail: %lu\n", (unsigned long) vfs.f_favail);
86      printf("\tf_fsid: %#lx\n", (unsigned long) vfs.f_fsid);
87
88      printf("\tf_flag: ");
89      if (vfs.f_flag == 0)
90          printf("(none)\n");
91      else {
92          if ((vfs.f_flag & ST_RDONLY) != 0)
93              printf("ST_RDONLY ");
94          if ((vfs.f_flag & ST_NOSUID) != 0)
95              printf("ST_NOSUID");
96          printf("\n");
97      }
98
99      printf("\tf_namemax: %#ld\n", (long) vfs.f_namemax);
100 }
Lines 67–68 skip filesystems that are not based on a real disk device. This means that filesystems like /proc or /dev/pts are ignored. (Admittedly, this check is a heuristic, but it works: In /etc/mtab mounted devices are listed by the full device pathname: for example, /dev/hda1.) Line 70 calls statvfs() with appropriate error checking, and lines 77–99 print the information. Lines 89–96 deal with flags: single bits of information that are or are not present. See the sidebar for a discussion of how flag bits are used in C code. Here is the output of ch08-statvfs:
$ ch08-statvfs                     Run the program
/, mounted on /dev/hda2:           Results for ext2 filesystem
        f_bsize: 4096
        f_frsize: 4096
        f_blocks: 1549609
        f_bfree: 316663
        f_bavail: 237945
        f_files: 788704
        f_ffree: 555482
        f_favail: 555482
        f_fsid: 0
        f_flag: (none)
        f_namemax: 255
...
/win, mounted on /dev/hda1:        Results for vfat filesystem
        f_bsize: 4096
        f_frsize: 4096
        f_blocks: 2092383
        f_bfree: 1391952
        f_bavail: 1391952
        f_files: 0
        f_ffree: 0
        f_favail: 0
        f_fsid: 0
        f_flag: ST_NOSUID
        f_namemax: 260
As of this writing, for GLIBC 2.3.2 and earlier, GNU df doesn’t use statvfs(). This is because the code reads /etc/mtab and calls stat() for each mounted filesystem to find the one on which the device number matches that of the file (or file descriptor) argument. It needs to find the filesystem in order to read the mount options so that it can set the f_flag bits. The problem is that stat() on a remotely mounted filesystem whose server is not available can hang indefinitely, thus causing df to hang as well. This problem has since been fixed in GLIBC, but df won’t change for a while so that it can continue to work on older systems.

NOTE  Although POSIX specifies statvfs() and fstatvfs(), not all systems support them or support them correctly. Many systems (including Linux, as described shortly) have their own system calls that provide similar information. GNU df uses a library routine to acquire filesystem information; the source file for that routine is full of #ifdefs for a plethora of different systems. With time, the portability situation should improve.
Bit Flags

A common technique, applicable in many cases, is to have a set of flag values; when a flag is set (that is, true), a certain fact is true or a certain condition applies. Flag values are defined with either #defined symbolic constants or enums. In this chapter, the nftw() API (described later) also uses flags. There are only two flags for the struct statvfs field f_flag:

#define ST_RDONLY 1    /* read-only filesystem */        Sample definitions
#define ST_NOSUID 2    /* setuid/setgid not allowed */

Physically, each symbolic constant represents a different bit position within the f_flag value. Logically, each value represents a separate bit of state information; that is, some fact or condition that is or isn’t true for this particular instance of a struct statvfs.

Flags are set, tested, and cleared with the C bitwise operators. For example, statvfs() would set these flags, using the bitwise OR operator:

int statvfs(const char *path, struct statvfs *vfs)
{
    … fill in most of *vfs …
    vfs->f_flag = 0;                      Make sure it starts out as zero
    if (filesystem is read-only)
        vfs->f_flag |= ST_RDONLY;         Add the ST_RDONLY flag
    if (filesystem disallows setuid)
        vfs->f_flag |= ST_NOSUID;         Add the ST_NOSUID flag
    … rest of routine …
}

The bitwise AND operator tests whether a flag is set, and a combination of the bitwise AND and COMPLEMENT operators clears a flag:

if ((vfs.f_flag & ST_RDONLY) != 0)        True if ST_RDONLY flag is set

vfs.f_flag &= ~(ST_RDONLY|ST_NOSUID);     Clear both flags

The bitwise operators are daunting if you’ve not used them before. However, the example code just shown represents common C idioms. Study each operation carefully; perhaps draw yourself a few pictures showing how these operators work. Once you understand them, you can train yourself to recognize these operators as high-level operations for managing flag values instead of treating them as low-level bit manipulations.

The reason to use flag values is that they provide considerable savings in data space. The single unsigned long field lets you store at least 32 separate bits of information. GLIBC (as of this writing) defines 11 different flags for the f_flag field.* If you used a separate char field for each flag, that would use 11 bytes instead of just the four used by the unsigned long. If you had 32 flags, that would be 32 bytes instead of just four!
* See /usr/include/bits/statvfs.h on a GNU/Linux system.
8.3.2 Linux Style: statfs() and fstatfs()

The statfs() and fstatfs() system calls are Linux specific. Their declarations are as follows:

GLIBC

#include <sys/types.h>
#include <sys/vfs.h>

int statfs(const char *path, struct statfs *buf);
int fstatfs(int fd, struct statfs *buf);

As with statvfs() and fstatvfs(), the two versions work on a filename or an open file descriptor, respectively. The struct statfs looks like this:

struct statfs {
    long f_type;        /* type of filesystem */
    long f_bsize;       /* optimal transfer block size */
    long f_blocks;      /* total data blocks in filesystem */
    long f_bfree;       /* free blocks in fs */
    long f_bavail;      /* free blocks avail to nonsuperuser */
    long f_files;       /* total file nodes in file system */
    long f_ffree;       /* free file nodes in fs */
    fsid_t f_fsid;      /* filesystem id */
    long f_namelen;     /* maximum length of filenames */
    long f_spare[6];    /* spare for later */
};
The fields are analogous to those in the struct statvfs. At least through GLIBC 2.3.2, the POSIX statvfs() and fstatvfs() functions are wrappers around statfs() and fstatfs(), respectively, copying the values from one kind of struct to the other. The advantage to using statfs() or fstatfs() is that they are system calls. The kernel returns the information directly. Since there is no f_flag field with mount options, it’s not necessary to look at every mounted filesystem to find the right one. (In other words, in order to fill in the mount options, statvfs() must examine each mounted filesystem to find the one containing the file named by path or fd. statfs() doesn’t need to do that, since it doesn’t provide information about the mount options.) There are two disadvantages to using these calls. First, they are Linux specific. Second, some of the information in the struct statvfs isn’t in the struct statfs; most noticeably, the mount flags (f_flag) and the number of available inodes (f_favail). (Thus, the Linux statvfs() has to find mount options from other sources, such as /etc/mtab, and it “fakes” the information for the struct statvfs fields for which real information isn’t available.)
One field in the struct statfs deserves special note. This is the f_type field, which indicates the type of the filesystem. The value is the filesystem’s magic number, extracted from the superblock. The statfs(2) manpage provides a list of commonly used filesystems and their magic numbers, which we use in ch08-statfs.c. (Alas, there is no separate #include file.)

 1  /* ch08-statfs.c --- demonstrate Linux statfs */
 2
 3  /* NOTE: GNU/Linux specific! */
 4
 5  #include <stdio.h>
 6  #include <errno.h>
 7  #include <mntent.h>      /* for getmntent(), et al. */
 8  #include <unistd.h>      /* for getopt() */
 9  #include <sys/types.h>
10  #include <sys/vfs.h>
11
12  /* Defines taken from statfs(2) man page: */
13  #define AFFS_SUPER_MAGIC      0xADFF
14  #define EFS_SUPER_MAGIC       0x00414A53
15  #define EXT_SUPER_MAGIC       0x137D
16  #define EXT2_OLD_SUPER_MAGIC  0xEF51
17  #define EXT2_SUPER_MAGIC      0xEF53
18  #define HPFS_SUPER_MAGIC      0xF995E849
19  #define ISOFS_SUPER_MAGIC     0x9660
20  #define MINIX_SUPER_MAGIC     0x137F        /* orig. minix */
21  #define MINIX_SUPER_MAGIC2    0x138F        /* 30-char minix */
22  #define MINIX2_SUPER_MAGIC    0x2468        /* minix V2 */
23  #define MINIX2_SUPER_MAGIC2   0x2478        /* minix V2, 30 char names */
24  #define MSDOS_SUPER_MAGIC     0x4d44
25  #define NCP_SUPER_MAGIC       0x564c
26  #define NFS_SUPER_MAGIC       0x6969
27  #define PROC_SUPER_MAGIC      0x9fa0
28  #define SMB_SUPER_MAGIC       0x517B
29  #define XENIX_SUPER_MAGIC     0x012FF7B4
30  #define SYSV4_SUPER_MAGIC     0x012FF7B5
31  #define SYSV2_SUPER_MAGIC     0x012FF7B6
32  #define COH_SUPER_MAGIC       0x012FF7B7
33  #define UFS_MAGIC             0x00011954
34  #define XFS_SUPER_MAGIC       0x58465342
35  #define _XIAFS_SUPER_MAGIC    0x012FD16D
36
37  void process(const char *filename);
38  void do_statfs(const struct mntent *fs);
39
40  int errors = 0;
41  char *myname;
42

… main() is unchanged, process() is almost identical …

85  /* type2str --- convert fs type to printable string, from statfs(2) */
86
87  const char *type2str(long type)
88  {
89      static struct fsname {
90          long type;
91          const char *name;
92      } table[] = {
93          { AFFS_SUPER_MAGIC, "AFFS" },
94          { COH_SUPER_MAGIC, "COH" },
95          { EXT2_OLD_SUPER_MAGIC, "OLD EXT2" },
96          { EXT2_SUPER_MAGIC, "EXT2" },
97          { HPFS_SUPER_MAGIC, "HPFS" },
98          { ISOFS_SUPER_MAGIC, "ISOFS" },
99          { MINIX2_SUPER_MAGIC, "MINIX V2" },
100         { MINIX2_SUPER_MAGIC2, "MINIX V2 30 char" },
101         { MINIX_SUPER_MAGIC, "MINIX" },
102         { MINIX_SUPER_MAGIC2, "MINIX 30 char" },
103         { MSDOS_SUPER_MAGIC, "MSDOS" },
104         { NCP_SUPER_MAGIC, "NCP" },
105         { NFS_SUPER_MAGIC, "NFS" },
106         { PROC_SUPER_MAGIC, "PROC" },
107         { SMB_SUPER_MAGIC, "SMB" },
108         { SYSV2_SUPER_MAGIC, "SYSV2" },
109         { SYSV4_SUPER_MAGIC, "SYSV4" },
110         { UFS_MAGIC, "UFS" },
111         { XENIX_SUPER_MAGIC, "XENIX" },
112         { _XIAFS_SUPER_MAGIC, "XIAFS" },
113         { 0, NULL },
114     };
115     static char unknown[100];
116     int i;
117
118     for (i = 0; table[i].type != 0; i++) {
119         if (table[i].type == type)
120             return table[i].name;
121     }
122
123     sprintf(unknown, "unknown type: %#lx", type);
124     return unknown;
125 }
126
127 /* do_statfs --- Use statfs and print info */
128
129 void do_statfs(const struct mntent *fs)
130 {
131     struct statfs vfs;
132
133     if (fs->mnt_fsname[0] != '/')    /* skip nonreal filesystems */
134         return;
135
136     if (statfs(fs->mnt_dir, & vfs) != 0) {
137         fprintf(stderr, "%s: %s: statfs failed: %s\n",
138             myname, fs->mnt_dir, strerror(errno));
139         errors++;
140         return;
141     }
142
143     printf("%s, mounted on %s:\n", fs->mnt_dir, fs->mnt_fsname);
144
145     printf("\tf_type: %s\n", type2str(vfs.f_type));
146     printf("\tf_bsize: %ld\n", vfs.f_bsize);
147     printf("\tf_blocks: %ld\n", vfs.f_blocks);
148     printf("\tf_bfree: %ld\n", vfs.f_bfree);
149     printf("\tf_bavail: %ld\n", vfs.f_bavail);
150     printf("\tf_files: %ld\n", vfs.f_files);
151     printf("\tf_ffree: %ld\n", vfs.f_ffree);
152     printf("\tf_namelen: %ld\n", vfs.f_namelen);
153 }
To save space, we’ve omitted main(), which is unchanged from the other programs presented earlier, and we’ve also omitted process(), which now calls do_statfs() instead of do_statvfs().

Lines 13–35 contain the list of filesystem magic numbers from the statfs(2) manpage. Although the numbers could be retrieved from kernel source code header files, such retrieval is painful (we tried), and the presentation here is easier to follow. Lines 86–125 define type2str(), which converts the magic number to a printable string. It does a simple linear search on a table of (value, string) pairs. In the (unlikely) event that the magic number isn’t in the table, type2str() creates an “unknown type” message and returns that (lines 123–124).

do_statfs() (lines 129–153) prints the information from the struct statfs. The f_fsid member is omitted since fsid_t is an opaque type. The code is straightforward; line 145 uses type2str() to print the filesystem type. As for the similar program using statvfs(), this function ignores filesystems that aren’t on local devices (lines 133–134). Here is the output on our system:
$ ch08-statfs                      Run the program
/, mounted on /dev/hda2:           Results for ext2 filesystem
        f_type: EXT2
        f_bsize: 4096
        f_blocks: 1549609
        f_bfree: 316664
        f_bavail: 237946
        f_files: 788704
        f_ffree: 555483
        f_namelen: 255
...
/win, mounted on /dev/hda1:        Results for vfat filesystem
        f_type: MSDOS
        f_bsize: 4096
        f_blocks: 2092383
        f_bfree: 1391952
        f_bavail: 1391952
        f_files: 0
        f_ffree: 0
        f_namelen: 260
In conclusion, whether to use statvfs() or statfs() in your own code depends on your requirements. As described in the previous section, GNU df doesn’t use statvfs() under GNU/Linux and in general tends to use each Unix system’s unique “get filesystem info” system call. Although this works, it isn’t pretty. On the other hand, sometimes you have no choice: for example, the GLIBC problems we mentioned above. In this case, there is no perfect solution.
8.4 Moving Around in the File Hierarchy

Several system calls and standard library functions let you change your current directory and determine the full pathname of the current directory. More complicated functions let you perform arbitrary actions for every filesystem object in a directory hierarchy.
8.4.1 Changing Directory: chdir() and fchdir()

In Section 1.2, “The Linux/Unix Process Model,” page 10, we said:

    The current working directory is the one to which relative pathnames (those that don’t start with a /) are relative. This is the directory you are “in” whenever you issue a ‘cd someplace’ command to the shell.
Each process has a current working directory. Each new process inherits its current directory from the process that started it (its parent). Two functions let you change to another directory:

POSIX XSI

#include <unistd.h>

int chdir(const char *path);
int fchdir(int fd);
The chdir() function takes a string naming a directory, whereas fchdir() expects a file descriptor that was opened on a directory with open().6 Both return 0 on success and -1 on error (with errno set appropriately). Typically, if open() on a directory succeeded, then fchdir() will also succeed, unless someone changed the permissions on the directory between the calls. (fchdir() is a relatively new function; older Unix systems won’t have it.)

These functions are almost trivial to use. The following program, ch08-chdir.c, demonstrates both functions. It also demonstrates that fchdir() can fail if the permissions on the open directory don’t include search (execute) permission:

 1  /* ch08-chdir.c --- demonstrate chdir() and fchdir().
 2     Error checking omitted for brevity */
 3
 4  #include <stdio.h>
 5  #include <fcntl.h>
 6  #include <sys/types.h>
 7  #include <sys/stat.h>
 8  #include <unistd.h>
 9
10  int main(void)
11  {
12      int fd;
13      struct stat sbuf;
14
15      fd = open(".", O_RDONLY);    /* open directory for reading */
16      fstat(fd, & sbuf);           /* obtain info, need original permissions */
17      chdir("..");                 /* `cd ..' */
18      fchmod(fd, 0);               /* zap permissions on original directory */
19
20      if (fchdir(fd) < 0)          /* try to `cd' back, should fail */
21          perror("fchdir back");
22
6 On GNU/Linux and BSD systems, you can apply the dirfd() function to a DIR * pointer to obtain the
underlying file descriptor; see the GNU/Linux dirfd(3) manpage.
23      fchmod(fd, sbuf.st_mode & 07777);    /* restore original permissions */
24      close(fd);                           /* all done */
25
26      return 0;
27  }
Line 15 opens the current directory. Line 16 calls fstat() on the open directory so that we have a copy of its permissions. Line 17 uses chdir() to move up a level in the file hierarchy. Line 18 does the dirty work, turning off all permissions on the original directory.

Lines 20–21 attempt to change back to the original directory. It is expected to fail, since the current permissions don’t allow it. Line 23 restores the original permissions. The ‘sbuf.st_mode & 07777’ retrieves the low-order 12 permission bits; these are the regular 9 rwxrwxrwx bits, and the setuid, setgid, and “sticky” bits, which we discuss in Chapter 11, “Permissions and User and Group ID Numbers,” page 403. Finally, line 24 cleans up by closing the open file descriptor. Here’s what happens when the program runs:

$ ls -ld .                                      Show current permissions
drwxr-xr-x    2 arnold   devel    4096 Sep  9 16:42 .
$ ch08-chdir                                    Run the program
fchdir back: Permission denied                  Fails as expected
$ ls -ld .                                      Look at permissions again
drwxr-xr-x    2 arnold   devel    4096 Sep  9 16:42 .    Everything is back as it was
8.4.2 Getting the Current Directory: getcwd()

The aptly named getcwd() function retrieves the absolute pathname of the current working directory:

POSIX

#include <unistd.h>

char *getcwd(char *buf, size_t size);
The function fills in buf with the pathname; it expects buf to have size bytes. Upon success, it returns its first argument. Otherwise, if it needs more than size bytes, it returns NULL and sets errno to ERANGE. The intent is that if ERANGE happens, you should try to allocate a larger buffer (with malloc() or realloc()) and try again. If any of the directory components leading to the current directory are not readable or searchable, then getcwd() can fail and errno will be EACCES. The following simple program demonstrates its use:
/* ch08-getcwd.c --- demonstrate getcwd().
   Error checking omitted for brevity */

#include <stdio.h>
#include <errno.h>
#include <limits.h>     /* for PATH_MAX */
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char buf[PATH_MAX];
    char *cp;

    cp = getcwd(buf, sizeof(buf));
    printf("Current dir: %s\n", buf);

    printf("Changing to ..\n");
    chdir("..");    /* `cd ..' */

    cp = getcwd(buf, sizeof(buf));
    printf("Current dir is now: %s\n", buf);

    return 0;
}
This simple program prints the current directory, changes to the parent directory, and then prints the new current directory. (cp isn’t really needed here, but in a real program it would be used for error checking.) When run, it produces the following output:

$ ch08-getcwd
Current dir: /home/arnold/work/prenhall/progex/code/ch08
Changing to ..
Current dir is now: /home/arnold/work/prenhall/progex/code
Formally, if the buf argument is NULL, the behavior of getcwd() is undefined. In this case, the GLIBC version of getcwd() will call malloc() for you, allocating a buffer of size size. Going even further out of its way to be helpful, if size is 0, then the buffer it allocates will be “big enough” to hold the returned pathname. In either case, you should call free() on the returned pointer when you’re done with the buffer. The GLIBC behavior is helpful, but it’s not portable. For code that has to work across platforms, you can write a replacement function that provides the same functionality while having your replacement function call getcwd() directly if on a GLIBC system.
GNU/Linux systems provide the file /proc/self/cwd. This file is a symbolic link to the current directory:

$ cd /tmp                                 Change directory someplace
$ ls -l /proc/self/cwd                    Look at the file
lrwxrwxrwx    1 arnold   devel    0 Sep  9 17:29 /proc/self/cwd -> /tmp
$ cd                                      Change to home directory
$ ls -l /proc/self/cwd                    Look at it again
lrwxrwxrwx    1 arnold   devel    0 Sep  9 17:30 /proc/self/cwd -> /home/arnold
This is convenient at the shell level but presents a problem at the programmatic level. In particular, the size of the file is zero! (This is because it’s a file in /proc, which the kernel fakes; it’s not a real file living on disk.) Why is the zero size a problem? If you remember from Section 5.4.5, “Working with Symbolic Links,” page 151, lstat() on a symbolic link returns the number of characters in the name of the linked-to file in the st_size field of the struct stat. This number can then be used to allocate a buffer of the appropriate size for use with readlink(). That won’t work here, since the size is zero. You have to use (or allocate) a buffer that you guess is big enough. However, since readlink() does not fill in any more characters than you provide, you can’t tell whether or not the buffer is big enough; readlink() does not fail when there isn’t enough room. (See the Coreutils xreadlink() function in Section 5.4.5, “Working with Symbolic Links,” page 151, which solves the problem.) In addition to getcwd(), GLIBC has several other nonportable routines. These save you the trouble of managing buffers and provide compatibility with older BSD systems. For the details, see getcwd(3).
8.4.3 Walking a Hierarchy: nftw()

A common programming task is to process entire directory hierarchies: doing something for every file and every directory and subdirectory in an entire tree. Consider, for example, du, which prints disk usage information, ‘chown -R’, which recursively changes ownership, or the find program, which finds files matching certain criteria.

At this point, you know enough to write your own code to manually open and read directories, call stat() (or lstat()) for each entry, and recursively process subdirectories. However, such code is challenging to get right; it’s possible to run out of file descriptors if you leave parent directories open while processing subdirectories; you have to decide whether to process symbolic links as themselves or as the files they point
to; you have to be able to deal with directories that aren’t readable or searchable, and so on. It’s also painful to have to write the same code over and over again if you need it for multiple applications.

8.4.3.1 The nftw() Interface

To obviate the problems, System V introduced the ftw() (“file tree walk”) function. ftw() did all the work to “walk” a file tree (hierarchy). You supplied it with a pointer to a function, and it called the function for every file object it encountered. Your function could then process each filesystem object as it saw fit.

Over time, it became clear that the ftw() interface didn’t quite do the full job;7 for example, originally it didn’t support symbolic links. For this reason, nftw() (“new ftw()” [sic]) was added to the X/Open Portability Guide; it’s now part of POSIX. Here’s the prototype:

#include <ftw.h>                                     XSI

int nftw(const char *dir,                            Starting point
         int (*fn)(const char *file,                 Function pointer to function
                   const struct stat *sb,              of four arguments
                   int flag, struct FTW *s),
         int depth, int flags);                      Max open fds, flags
And here are the arguments:

const char *dir
    A string naming the starting point of the hierarchy to process.

int (*fn)(const char *file, const struct stat *sb, int flag, struct FTW *s)
    A pointer to a function with the given arguments. This function is called for every object in the hierarchy. Details below.

int depth
    This argument is misnamed. To avoid running out of file descriptors, nftw() keeps no more than depth simultaneously open directories. This does not prevent
7 POSIX standardizes the ftw() interface to support existing code, and GNU/Linux and commercial Unix systems continue to supply it. However, since it’s underpowered, we don’t otherwise discuss it. See ftw(3) if you’re interested.
nftw() from processing hierarchies that are more than depth levels deep; but smaller values for depth mean that nftw() has to do more work.

int flags
    A set of flags, bitwise OR’d, that direct how nftw() should process the hierarchy.

The nftw() interface has two disjoint sets of flags. One set controls nftw() itself (the flags argument to nftw()). The other set is passed to the user-supplied function that nftw() calls (the flag argument to (*fn)()). However, the interface is confusing, because both sets of flags use names starting with the prefix ‘FTW_’. We’ll do our best to keep this clear as we go. Table 8.3 presents the flags that control nftw().

TABLE 8.3 Control flags for nftw()

Flag        Meaning
FTW_CHDIR   When set, change to each directory before opening it. This action
            is more efficient, but the calling application has to be prepared
            to be in a different directory when nftw() is done.
FTW_DEPTH   When set, do a “depth-first search.” This means that all of the
            files and subdirectories in a directory are processed before the
            directory itself is processed.
FTW_MOUNT   When set, stay within the same mounted filesystem. This is a more
            specialized option.
FTW_PHYS    When set, do not follow symbolic links.

FTW_CHDIR provides greater efficiency; when processing deep file hierarchies, the kernel doesn’t have to process full pathnames over and over again when doing a stat() or opening a directory. The time savings on large hierarchies can be quite noticeable.8
FTW_DEPTH may or may not be what you need; for some applications it’s just right. Consider ‘chmod -R u-rx .’. This removes read and execute permission for the
owner of all files and subdirectories in the current directory. If this permission change is applied to a directory before it’s applied to the directory’s contents, any subsequent attempt to process the contents will fail! Thus, it should be applied after the contents
8 Some older GLIBC versions have problems with FTW_CHDIR. This is not true for GLIBC 2.3.2 and later, and it’s unlikely that you’ll encounter problems.
have been processed.9 The GNU/Linux nftw(3) manpage notes for FTW_PHYS that “this is what you want.” This lets you process symbolic links as themselves, which is usually what’s necessary. (Consider du; it should count the link’s space separately from that of the linked-to file.)

8.4.3.2 The nftw() Callback Function

As nftw() runs, it calls a function to which you supply a pointer. (Such functions are termed callback functions since they are “called back” from library code.) The callback function receives four arguments:

const char *file
    The name of the current file (directory, symbolic link, etc.) being processed.

const struct stat *sb
    A pointer to a struct stat for the file.

int flag
    One of several flag values (described below) indicating what kind of file this is or whether an error was encountered for the object.

struct FTW *s
    This structure provides two separate pieces of information:

    struct FTW {
        int base;     /* Index in file of base part of filename */
        int level;    /* Depth of this item relative to starting point */
    };
The flag parameter has one of the values listed in Table 8.4. The struct FTW *s provides additional information that can be useful. s->base acts as an index into file; file is the full pathname of the object being processed (relative to the starting point). ‘file + s->base’ points to the first character of the filename component of the file.
9 Why anyone would want to make such a change, we don’t know, but the “you asked for it, you got it” philosophy applies here too!
TABLE 8.4 Flag values for nftw() callback function

Flag      Meaning
FTW_F     Object is a regular file.
FTW_D     Object is a directory.
FTW_DNR   Object is a directory that wasn’t readable.
FTW_SL    Object is a symbolic link.
FTW_NS    Object is not a symbolic link, and stat() failed.
FTW_DP    Object is a directory whose children have already been processed.
          This can only happen if FTW_DEPTH was used in the call to nftw().
FTW_SLN   Object is a symbolic link pointing to a nonexistent file. This can
          only happen if FTW_PHYS was not used in the call to nftw().
s->level indicates the current depth in the hierarchy; the original starting point is considered to be at level 0.
The callback function should return 0 if all is well. Any nonzero return causes nftw() to stop its processing and to return the same nonzero value. The manpage notes that the callback function should stop processing only by using its return value so that nftw() has a chance to clean up: that is, free any dynamic storage, close open file descriptors, and so on. The callback function should not use longjmp() unless the program will immediately exit. (longjmp() is an advanced function, which we describe in Section 12.5, “Nonlocal Gotos,” page 446.)

The recommended technique for handling errors is to set a global variable indicating that there were problems, return 0 from the callback, and deal with the failures once nftw() has completed traversing the file hierarchy. (GNU du does this, as we see shortly.)

Let’s tie all this together with an example program. ch08-nftw.c processes each file or directory named on the command line, running nftw() on it. The function that processes each file prints the filename and type with indentation, showing the hierarchical position of each file. For a change, we show the results first, and then we show and discuss the program:
$ pwd                                     Where we are
/home/arnold/work/prenhall/progex
$ code/ch08/ch08-nftw code                Walk the ‘code’ directory
code (directory)                          Top-level directory
    ch02 (directory)                      Subdirectories one level indented
        ch02-printenv.c (file)            Files in subdirs two levels indented
    ch03 (directory)
        ch03-memaddr.c (file)
    ch04 (directory)
        ch04-holes.c (file)
        ch04-cat.c (file)
        ch04-maxfds.c (file)
        v7cat.c (file)
...
Here’s the program itself:

 1  /* ch08-nftw.c --- demonstrate nftw() */
 2
 3  #define _XOPEN_SOURCE 1            /* Required under GLIBC for nftw() */
 4  #define _XOPEN_SOURCE_EXTENDED 1   /* Same */
 5
 6  #include <stdio.h>
 7  #include <errno.h>
 8  #include <string.h>
 9  #include <ftw.h>      /* gets <sys/types.h> and <sys/stat.h> for us */
10  #include <limits.h>   /* for PATH_MAX */
11  #include <unistd.h>   /* for getdtablesize(), getcwd() declarations */
12
13  #define SPARE_FDS 5   /* fds for use by other functions, see text */
14
15  extern int process(const char *file, const struct stat *sb,
16                     int flag, struct FTW *s);
17
18  /* usage --- print message and die */
19
20  void usage(const char *name)
21  {
22      fprintf(stderr, "usage: %s [-c] directory ...\n", name);
23      exit(1);
24  }
25
26  /* main --- call nftw() on each command-line argument */
27
28  int main(int argc, char **argv)
29  {
30      int i, c, nfds;
31      int errors = 0;
32      int flags = FTW_PHYS;
33      char start[PATH_MAX], finish[PATH_MAX];
34
35      while ((c = getopt(argc, argv, "c")) != -1) {
36          switch (c) {
37          case 'c':
38              flags |= FTW_CHDIR;
39              break;
40          default:
41              usage(argv[0]);
42              break;
43          }
44      }
45
46      if (optind == argc)
47          usage(argv[0]);
48
49      getcwd(start, sizeof start);
50
51      nfds = getdtablesize() - SPARE_FDS;  /* leave some spare descriptors */
52      for (i = optind; i < argc; i++) {
53          if (nftw(argv[i], process, nfds, flags) != 0) {
54              fprintf(stderr, "%s: %s: stopped early\n",
55                  argv[0], argv[i]);
56              errors++;
57          }
58      }
59
60      if ((flags & FTW_CHDIR) != 0) {
61          getcwd(finish, sizeof finish);
62          printf("Starting dir:  %s\n", start);
63          printf("Finishing dir: %s\n", finish);
64      }
65
66      return (errors != 0);
67  }
Lines 3–11 include header files. Through at least GLIBC 2.3.2, the #defines for _XOPEN_SOURCE and _XOPEN_SOURCE_EXTENDED are necessary before any header file inclusion. They make it possible to get the declarations and flag values that nftw() provides over and above those of ftw(). This is specific to GLIBC. The need for it will eventually disappear as GLIBC becomes fully compliant with the 2001 POSIX standard.

Lines 35–44 process options. The -c option adds the FTW_CHDIR flag to the nftw() flags. This is an experiment to see if you can end up somewhere different from where you started. It seems that if nftw() fails, you can; otherwise, you end up back where you were. (POSIX doesn’t document this explicitly, but the intent seems to be that you do end up back where you started. The standard does say that the callback function should not change the current directory.)

Line 49 saves the starting directory for later use, using getcwd().
Line 51 computes the number of file descriptors nftw() can use. We don’t want it to use all available file descriptors in case the callback function wants to open files too. The computation uses getdtablesize() (see Section 4.4.1, “Understanding File Descriptors,” page 92) to retrieve the maximum available number and subtracts SPARE_FDS, which was defined earlier, on line 13.

This procedure warrants more explanation. In the normal case, at least three descriptors are already used for standard input, standard output, and standard error. nftw() needs some number of file descriptors for opening and reading directories; under the hood, opendir() uses open() to open a directory for reading. If the callback function also needs to open files, we have to prevent nftw() from using up all available file descriptors with open directories. We do this by subtracting some number from the maximum available. For this example, we chose five, but if the callback function needs to open files, a larger number should be used. (nftw() knows how to recover when it runs out of file descriptors; we don’t have to worry about that case.)

Lines 52–58 are the main loop over the arguments; lines 53–57 check for errors; when they occur, the code prints a diagnostic and increments the errors variable.

Lines 60–64 are part of the experiment for FTW_CHDIR, printing the starting and finishing directories if -c was used.

The function of real interest is process(); this is the callback function that processes each file. It uses the basic template for an nftw() callback function, which is a switch statement on the flag value:

69  /* process --- print out each file at the right level */
70
71  int process(const char *file, const struct stat *sb,
72              int flag, struct FTW *s)
73  {
74      int retval = 0;
75      const char *name = file + s->base;
76
77      printf("%*s", s->level * 4, "");    /* indent over */
78
79      switch (flag) {
80      case FTW_F:
81          printf("%s (file)\n", name);
82          break;
83      case FTW_D:
84          printf("%s (directory)\n", name);
85          break;
 86      case FTW_DNR:
 87          printf("%s (unreadable directory)\n", name);
 88          break;
 89      case FTW_SL:
 90          printf("%s (symbolic link)\n", name);
 91          break;
 92      case FTW_NS:
 93          printf("%s (stat failed): %s\n", name, strerror(errno));
 94          break;
 95      case FTW_DP:
 96      case FTW_SLN:
 97          printf("%s: FTW_DP or FTW_SLN: can't happen!\n", name);
 98          retval = 1;
 99          break;
100      default:
101          printf("%s: unknown flag %d: can't happen!\n", name, flag);
102          retval = 1;
103          break;
104      }
105
106      return retval;
107  }
Line 75 uses ‘file + s->base’ to get at the name part of the full pathname. This pointer value is saved in the name variable for reuse throughout the function.

Line 77 produces the right amount of indentation, using a nice trick. Using %*s, printf() takes the field width from the first argument. This is computed dynamically as ‘level * 4’. The string to be printed is "", the null string. The end result is that printf() produces the right amount of space for us, without our having to run a loop.

Lines 79–104 are the switch statement. In this case, it doesn’t do anything terribly interesting except print the file’s name and its type (file, directory, etc.). Although this program doesn’t use the struct stat, it should be clear that you could do anything you need to in the callback function.

NOTE  Jim Meyering, the maintainer of the GNU Coreutils, notes that the nftw() design isn’t perfect, because of its recursive nature. (It calls itself recursively when processing subdirectories.) If a directory hierarchy gets really deep, in the 20,000–40,000 level range (!), nftw() can run out of stack space, killing the program. There are other problems related to nftw()’s design as well. The post-5.0 version of the GNU Coreutils fixes this by using the BSD fts() suite of routines (see fts(3)).
8.5 Walking a File Tree: GNU du

The GNU version of du in the GNU Coreutils uses nftw() to traverse one or more file hierarchies, gathering and producing statistics concerning the amount of disk space used. It has a large number of options that control its behavior with respect to symbolic links, output format of numbers, and so on. This makes the code harder to decipher than a simpler version would be. (However, we’re not going to let that stop us.) Here is a summary of du’s options, which will be helpful shortly when we look at the code:

$ du --help
Usage: du [OPTION]... [FILE]...
Summarize disk usage of each FILE, recursively for directories.

Mandatory arguments to long options are mandatory for short options too.
  -a, --all             write counts for all files, not just directories
      --apparent-size   print apparent sizes, rather than disk usage; although
                          the apparent size is usually smaller, it may be
                          larger due to holes in (`sparse') files, internal
                          fragmentation, indirect blocks, and the like
  -B, --block-size=SIZE use SIZE-byte blocks
  -b, --bytes           equivalent to `--apparent-size --block-size=1'
  -c, --total           produce a grand total
  -D, --dereference-args  dereference FILEs that are symbolic links
  -h, --human-readable  print sizes in human readable format (e.g., 1K 234M 2G)
  -H, --si              likewise, but use powers of 1000 not 1024
  -k                    like --block-size=1K
  -l, --count-links     count sizes many times if hard linked
  -L, --dereference     dereference all symbolic links
  -S, --separate-dirs   do not include size of subdirectories
  -s, --summarize       display only a total for each argument
  -x, --one-file-system skip directories on different filesystems
  -X FILE, --exclude-from=FILE  Exclude files that match any pattern in FILE.
      --exclude=PATTERN Exclude files that match PATTERN.
      --max-depth=N     print the total for a directory (or file, with --all)
                          only if it is N or fewer levels below the command
                          line argument;  --max-depth=0 is the same as
                          --summarize
      --help            display this help and exit
      --version         output version information and exit

SIZE may be (or may be an integer optionally followed by) one of following:
kB 1000, K 1024, MB 1,000,000, M 1,048,576, and so on for G, T, P, E, Z, Y.

Report bugs to .
To complicate matters further, du uses a private version of nftw() that has some extensions. First, there are additional flag values for the callback function:

FTW_DCHP
    This value signifies that nftw() could not execute ‘chdir("..")’.

FTW_DCH
    This value signifies that nftw() could not use chdir() to change into a directory itself.

FTW_DPRE
    The private nftw() calls the callback function for directories, twice. This value is used the first time a directory is encountered. The standard FTW_DP value is used after all the directory’s children have been processed.

The private nftw() also adds a new member, int skip, to the struct FTW. If the current object is a directory and the callback function sets the skip field to nonzero, nftw() will not process that directory any further. (The callback function should set skip this way when the flag is FTW_DPRE; doing it for FTW_DP is too late.)

With that explanation under our belt, here is the process_file() function from du.c. Line numbers are relative to the start of the function:

 1  /* This function is called once for every file system object that
 2     nftw encounters.  nftw does a depth-first traversal.  This function
 3     knows that and accumulates per-directory totals based on changes
 4     in the depth of the current entry.  */
 5
 6  static int
 7  process_file (const char *file, const struct stat *sb, int file_type,
 8                struct FTW *info)
 9  {
10    uintmax_t size;
11    uintmax_t size_to_print;
12    static int first_call = 1;
13    static size_t prev_level;
14    static size_t n_alloc;
15    static uintmax_t *sum_ent;
16    static uintmax_t *sum_subdir;
17    int print = 1;
18
19    /* Always define info->skip before returning.  */
20    info->skip = excluded_filename (exclude, file + info->base);  For --exclude
This function does a lot since it has to implement all of du’s options. Line 17 sets print to true (1); the default is to print information about each file. Later code sets it to false (0) if necessary.

Line 20 sets info->skip based on the --exclude option. Note that this excludes subdirectories if a directory matches the pattern for --exclude.

22    switch (file_type)
23      {
24      case FTW_NS:
25        error (0, errno, _("cannot access %s"), quote (file));
26        G_fail = 1;                        Set global var for later
27        return 0;                          Return 0 to keep going
28
29      case FTW_DCHP:
30        error (0, errno, _("cannot change to parent of directory %s"),
31               quote (file));
32        G_fail = 1;
33        return 0;
34
35      case FTW_DCH:
36        /* Don't return just yet, since although nftw couldn't chdir into
37           the directory, it was able to stat it, so we do have a size.  */
38        error (0, errno, _("cannot change to directory %s"), quote (file));
39        G_fail = 1;
40        break;
41
42      case FTW_DNR:
43        /* Don't return just yet, since although nftw couldn't read the
44           directory, it was able to stat it, so we do have a size.  */
45        error (0, errno, _("cannot read directory %s"), quote (file));
46        G_fail = 1;
47        break;
48
49      default:
50        break;
51      }
52
53    /* If this is the first (pre-order) encounter with a directory,
54       return right away.  */
55    if (file_type == FTW_DPRE)
56      return 0;
Lines 22–51 are the standard switch statement. Errors for which there’s no size information set the global variable G_fail to 1 and return 0 to keep going (see lines 24–27 and 29–33). Errors for which there is a size also set G_fail but then break out of the switch in order to handle the statistics (see lines 35–40 and 42–47).

Lines 55–56 return early if this is the first time a directory is encountered.

58    /* If the file is being excluded or if it has already been counted
59       via a hard link, then don't let it contribute to the sums.  */
60    if (info->skip
61        || (!opt_count_all
62            && 1 < sb->st_nlink
63            && hash_ins (sb->st_ino, sb->st_dev)))
64      {
65        /* Note that we must not simply return here.
66           We still have to update prev_level and maybe propagate
67           some sums up the hierarchy.  */
68        size = 0;
69        print = 0;
70      }
71    else
72      {
73        size = (apparent_size
74                ? sb->st_size
75                : ST_NBLOCKS (*sb) * ST_NBLOCKSIZE);
76      }
Now it starts to get interesting. By default, du counts the space used by hard-linked files just once. The --count-links option causes it to count each link’s space; the variable opt_count_all is true when --count-links is supplied. To keep track of links, du maintains a hash table10 of already seen (device, inode) pairs.

Lines 60–63 test whether a file should not be counted, either because it was excluded (info->skip is true, line 60) or because --count-links was not supplied (line 61) and the file has multiple links (line 62) and the file is already in the hash table (line 63). In this case, the size is set to 0, so that it doesn’t add to the running totals, and print is also set to false (lines 68–69).

If none of those conditions hold, the size is computed either according to the size in the struct stat or the number of disk blocks (lines 73–75). This decision is based on the apparent_size variable, which is set if the --apparent-size option is used.
10 A hash table is a data structure that allows quick retrieval of stored information; the details are beyond the scope of this book.
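The idea behind the hash_ins() test can be sketched with a simple linear table. This is our own illustration, not the du code: the seen_before() name, the fixed table size, and the linear scan (where du uses a real hash table) are all choices made for brevity:

```c
#include <sys/types.h>

#define MAX_SEEN 1024      /* arbitrary limit for this sketch */

static struct { dev_t dev; ino_t ino; } seen[MAX_SEEN];
static int nseen = 0;

/* seen_before --- return 1 if this (device, inode) pair was recorded
   on an earlier call; otherwise record it and return 0.  A walk that
   calls this for every file with st_nlink > 1 counts each set of
   hard links exactly once. */
int seen_before(dev_t dev, ino_t ino)
{
    int i;

    for (i = 0; i < nseen; i++)
        if (seen[i].dev == dev && seen[i].ino == ino)
            return 1;               /* already counted */
    if (nseen < MAX_SEEN) {         /* remember it for next time */
        seen[nseen].dev = dev;
        seen[nseen].ino = ino;
        nseen++;
    }
    return 0;
}
```

In a callback, the test would look like ‘if (sb->st_nlink > 1 && seen_before(sb->st_dev, sb->st_ino))’, mirroring the hash_ins() call on line 63.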
 78    if (first_call)
 79      {
 80        n_alloc = info->level + 10;                   Allocate arrays
 81        sum_ent = XCALLOC (uintmax_t, n_alloc);         to hold sums
 82        sum_subdir = XCALLOC (uintmax_t, n_alloc);
 83      }
 84    else
 85      {
 86        /* FIXME: it's a shame that we need these `size_t' casts to avoid
 87           warnings from gcc about `comparison between signed and unsigned'.
 88           Probably unavoidable, assuming that the members of struct FTW
 89           are of type `int' (historical), since I want variables like
 90           n_alloc and prev_level to have types that make sense.  */
 91        if (n_alloc <= (size_t) info->level)
 92          {
 93            n_alloc = info->level * 2;                Double amount
 94            sum_ent = XREALLOC (sum_ent, uintmax_t, n_alloc);     And reallocate
 95            sum_subdir = XREALLOC (sum_subdir, uintmax_t, n_alloc);
 96          }
 97      }
 98
 99    size_to_print = size;
Lines 78–97 manage the dynamic memory used to hold file size statistics. first_call is a static variable (line 12) that is true the first time process_file() is called. In this case, calloc() is called (through a wrapper macro on lines 81–82; this was discussed in Section 3.2.1.8, “Example: Reading Arbitrarily Long Lines,” page 67). The rest of the time, first_call is false, and realloc() is used (again, through a wrapper macro, lines 91–96).

Line 99 sets size_to_print to size; this variable may be updated depending on whether it has to include the sizes of any children. Although size could have been reused, the separate variable makes the code easier to read.

101    if (! first_call)
102      {
103        if ((size_t) info->level == prev_level)
104          {
105            /* This is usually the most common case.  Do nothing.  */
106          }
107        else if ((size_t) info->level > prev_level)
108          {
109            /* Descending the hierarchy.
110               Clear the accumulators for *all* levels between prev_level
111               and the current one.  The depth may change dramatically,
112               e.g., from 1 to 10.  */
113            int i;
114            for (i = prev_level + 1; i <= info->level; i++)
115              sum_ent[i] = sum_subdir[i] = 0;
116          }
117        else /* info->level < prev_level */
118          {
119            /* Ascending the hierarchy.
120               nftw processes a directory only after all entries in that
121               directory have been processed.  When the depth decreases,
122               propagate sums from the children (prev_level) to the parent.
123               Here, the current level is always one smaller than the
124               previous one.  */
125            assert ((size_t) info->level == prev_level - 1);
126            size_to_print += sum_ent[prev_level];
127            if (!opt_separate_dirs)
128              size_to_print += sum_subdir[prev_level];
129            sum_subdir[info->level] += (sum_ent[prev_level]
130                                        + sum_subdir[prev_level]);
131          }
132      }
Lines 101–132 compare the current level to the previous one. There are three possible cases:

• The levels are the same. In this case, there’s no need to worry about child statistics. (Lines 103–106.)

• The current level is higher than the previous level. In this case, we’ve gone down the hierarchy, and the statistics must be reset (lines 107–116). The term “accumulator” in the comment is apt: each element accumulates the total disk space used at that level. (In the early days of computing, CPU registers were often termed “accumulators.”)

• The current level is lower than the previous level. In this case, we’ve finished processing all the children in a directory and have just moved back up to the parent directory (lines 117–131). The code updates the totals, including size_to_print.

134    prev_level = info->level;                  Set static variables
135    first_call = 0;
136
137    /* Let the size of a directory entry contribute to the total for
138       the containing directory, unless --separate-dirs (-S) is
139       specified.  */
140    if ( ! (opt_separate_dirs && IS_FTW_DIR_TYPE (file_type)))
141      sum_ent[info->level] += size;
142
143    /* Even if this directory is unreadable or we can't chdir into it,
144       do let its size contribute to the total, ... */
145    tot_size += size;
146
147    /* ... but don't print out a total for it, since without the size(s)
148       of any potential entries, it could be very misleading.  */
149    if (file_type == FTW_DNR || file_type == FTW_DCH)
150      return 0;
151    /* If we're not counting an entry, e.g., because it's a hard link
152       to a file we've already counted (and --count-links), then don't
153       print a line for it.  */
154    if (!print)
155      return 0;
Lines 134–135 set the static variables prev_level and first_call so that they’ll have the correct values for a subsequent call to process_file(), ensuring that all the previous code works correctly.

Lines 137–144 adjust statistics on the basis of options and the file type. The comments and code are fairly straightforward.

Lines 146–155 quit early if the information should not be printed.

157    /* FIXME: This looks suspiciously like it could be simplified.  */
158    if ((IS_FTW_DIR_TYPE (file_type)
159         && (info->level <= max_depth || info->level == 0))
160        || ((opt_all && info->level <= max_depth) || info->level == 0))
161      {
162        print_only_size (size_to_print);
163        fputc ('\t', stdout);
164        if (arg_length)
165          {
166            /* Print the file name, but without the `.' or `/.'
167               directory suffix that we may have added in main.  */
168            /* Print everything before the part we appended.  */
169            fwrite (file, arg_length, 1, stdout);
170            /* Print everything after what we appended.  */
171            fputs (file + arg_length + suffix_length
172                   + (file[arg_length + suffix_length] == '/'),
173                   stdout);
174          }
175        else
176          fputs (file, stdout);
177
178        fputc ('\n', stdout);
179        fflush (stdout);
180      }
181
182    return 0;
183  }
The condition on lines 158–160 is confusing, and the comment on line 157 notes this. The condition states: “If (1a) the file is a directory and (1b) the level is less than the maximum to print (the --max-depth and max_depth variable) or the level is zero, or (2a) all files should be printed and the level is less than the maximum to print, or
(2b) the level is zero,” then print the file. (Yow! The post-5.0 version of du uses a slightly less complicated condition for this case.)

Lines 162–179 do the printing. Lines 162–163 print the size and a TAB character. Lines 164–173 handle a special case. This is explained later on in du.c, on lines 524–529 of the file:

524  /* When dereferencing only command line arguments, we're using nftw's
525     FTW_PHYS flag, so a symlink-to-directory specified on the command
526     line wouldn't normally be dereferenced.  To work around that, we
527     incur the overhead of appending `/.' (or `.') now, and later
528     removing it each time we output the name of a derived file or
529     directory name.  */
In this case, arg_length is true, so lines 164–173 have to print out the original name, not the modified one. Otherwise, lines 174–177 can print the name as it is.

Whew! That’s a lot of code. We find this to be on the upper end of the complexity spectrum, at least as far as what can be easily presented in a book of this nature. However, it demonstrates that real-world code is often complex. The best way to manage such complexity is with clearly named variables and detailed comments. du.c is good in that respect; we were able to extract the code and examine it fairly easily, without having to show all 735 lines of the program!
8.6 Changing the Root Directory: chroot()

The current working directory, set with chdir() (see Section 8.4.1, “Changing Directory: chdir() and fchdir(),” page 256), is an attribute of the process, just like the set of open files. It is also inherited by new processes. Less well known is that every process also has a current root directory. It is this directory to which the pathname / refers. Most of the time, a process’s root and the system root directories are identical. However, the superuser can change the root directory, with the (you guessed it) chroot() system call:

#include <unistd.h>                                  Common

int chroot(const char *path);

The return value is 0 upon success and -1 upon error. As the GNU/Linux chroot(2) manpage points out, changing the root directory does not change the current directory: Programs that must make sure that they stay underneath the new root directory must also execute chdir() afterwards:
if (chroot("/new/root") < 0)          Set new root directory
    /* handle error */

if (chdir("/some/dir") < 0)           Pathnames now relative to new root
    /* handle error */
The chroot() system call is used most often for daemons—background programs that must run in a special, contained environment. For example, consider an Internet FTP daemon that allows anonymous FTP (connection by anyone, from anywhere, without a regular username and password). Obviously, such a connection should not be able to see all the files on the whole system. Instead, the FTP daemon does a chroot() to a special directory with just enough structure to allow it to function. (For example, its own /bin/ls for listing files, its own copy of the C runtime library if it’s shared, and possibly its own copy of /etc/passwd and /etc/group to show a limited set of user and group names.) POSIX doesn’t standardize this system call, although GNU/Linux and all Unix systems support it. (It’s been around since V7.) It is specialized, but when you need it, it’s very handy.
8.7 Summary

• Filesystems are collections of free, inode, metadata, and data blocks, organized in a specific fashion. Filesystems correspond one-to-one with the (physical or logical) partitions in which they are made. Each filesystem has its own root directory; by convention the root directory always has inode number 2.

• The mount command mounts a filesystem, grafting it onto the logical hierarchical file namespace. The umount command detaches a filesystem. The kernel arranges for /. and /.. to be the same; the root directory of the entire namespace is its own parent. In all other cases, the kernel arranges for ‘..’ in the root of a mounted filesystem to point to the parent directory of the mount point.

• Modern Unix systems support multiple types of filesystems. In particular, Sun’s Network File System (NFS) is universally supported, as is the ISO 9660 standard format for CD-ROMs, and MS-DOS FAT partitions are supported on all Unix systems that run on Intel x86 hardware. To our knowledge, Linux supports the largest number of different filesystems—well over 30! Many are specialized, but many others are for general use, including at least four different journaling filesystems.
• The /etc/fstab file lists each system’s partitions, their mount points, and any relevant mount options. /etc/mtab lists those filesystems that are currently mounted, as does /proc/mounts on GNU/Linux systems. The loop option to mount is particularly useful under GNU/Linux for mounting filesystem images contained in regular files, such as CD-ROM images. Other options are useful for security and for mounting foreign filesystems, such as Windows vfat filesystems.

• The /etc/fstab-format files can be read with the getmntent() suite of routines. The GNU/Linux format is shared with several other commercial Unix variants, most notably Sun’s Solaris.

• The statvfs() and fstatvfs() functions are standardized by POSIX for retrieving filesystem information, such as the number of free and used disk blocks, the number of free and used inodes, and so on. Linux has its own system calls for retrieving similar information: statfs() and fstatfs().

• chdir() and fchdir() let a process change its current directory. getcwd() retrieves the absolute pathname of the current directory. These three functions are straightforward to use.

• The nftw() function centralizes the task of “walking a file tree,” that is, visiting every filesystem object (file, device, symbolic link, directory) in an entire directory hierarchy. Different flags control its behavior. The programmer then has to provide a callback function that receives each file’s name, a struct stat for the file, the file’s type, and information about the file’s name and level in the hierarchy. This function can then do whatever is necessary for each file. The Coreutils 5.0 version of GNU du uses an extended version of nftw() to do its job.

• Finally, the chroot() system call changes a process’s current root directory. This is a specialized but important facility, which is particularly useful for certain daemon-style programs.
8.8 Exercises

1. Examine the mount(2) manpage under GNU/Linux and on as many other different Unix systems as you have access to. How do the system calls differ?

2. Enhance ch08-statvfs.c to take an option giving an open integer file descriptor; it should use fstatvfs() to retrieve filesystem information.
3. Enhance ch08-statvfs.c to not ignore NFS-mounted filesystems. Such filesystems have a device of the form server.example.com:/big/disk.

4. Modify ch08-statfs.c (the one that uses the Linux-specific statfs() call) to produce output that looks like that from df.

5. Add a -i option to the program you wrote for the previous exercise to produce output like that of ‘df -i’.

6. Using opendir(), readdir(), stat() or fstat(), dirfd(), and fchdir(), write your own version of getcwd(). How will you compute the total size the buffer needs to be? How will you move through the directory hierarchy?

7. Enhance your version of getcwd() to allocate a buffer for the caller if the first argument is NULL.

8. Can you use nftw() to write getcwd()? If not, why not?

9. Using nftw(), write your own version of chown that accepts a -R option to recursively process entire directory trees. Make sure that without -R, ‘chown user directory’ does not recurse. How will you test it?

10. The BSD fts() (“file tree stream”) suite of routines provides a different way to process directory hierarchies. It has a somewhat heftier API, in terms of both the number of functions and the struct it makes available to the user-level function that calls it. These functions are available as a standard part of GLIBC. Read the fts(3) manpage. (It may help you to print it and have it handy.) Rewrite your private version of chown to use fts().

11. Look at the find(1) manpage. If you were to try to write find from scratch, which file tree suite would you prefer, nftw() or fts()? Why?
II Processes, IPC, and Internationalization

Chapter 9   Process Management and Pipes                      page 283
Chapter 10  Signals                                           page 347
Chapter 11  Permissions and User and Group ID Numbers         page 403
Chapter 12  General Library Interfaces — Part 2               page 427
Chapter 13  Internationalization and Localization             page 485
Chapter 14  Extended Interfaces                               page 529
9 Process Management and Pipes

In this chapter

9.1 Process Creation and Management                           page 284
9.2 Process Groups                                            page 312
9.3 Basic Interprocess Communication: Pipes and FIFOs         page 315
9.4 File Descriptor Management                                page 320
9.5 Example: Two-Way Pipes in gawk                            page 337
9.6 Suggested Reading                                         page 341
9.7 Summary                                                   page 342
Exercises                                                     page 344
As we said in Chapter 1, “Introduction,” page 3, if you were to summarize Unix (and thus Linux) in three words, they would have to be “files and processes.” Now that we’ve seen how to work with files and directories, it’s time to look at the rest of the story: processes. In particular, we examine how processes are created and managed, how they interact with open files, and how they can communicate with each other. Subsequent chapters examine signals—a coarse way for one process (or the kernel) to let another know that some event has occurred—and permission checking.
In this chapter the picture begins to get more complicated. In particular, to be fairly complete, we must mention things that aren’t covered until later in the chapter or later in the book. In such cases, we provide forward references, but you should be able to get the gist of each section without looking ahead.
9.1 Process Creation and Management

Unlike many predecessor and successor operating systems, process creation in Unix was intended to be (and is) cheap. Furthermore, Unix separated the idea of “create a new process” from that of “run a given program in a process.” This was an elegant design decision, one that simplifies many operations.
9.1.1 Creating a Process: fork()

The first step in starting a new program is calling fork():

    #include <sys/types.h>                                 POSIX
    #include <unistd.h>

    pid_t fork(void);
Using fork() is simple. Before the call, one process, which we term the parent, is running. When fork() returns, there are two processes: the parent and the child.

Here is the key: The two processes both run the same program. The two processes can distinguish themselves based on the return value from fork():

Negative
    If there is an error, fork() returns -1, and no new process is created. The original process continues running.

Zero
    In the child, fork() returns 0.

Positive
    In the parent, fork() returns the positive process identification number (PID) of the child.

Boilerplate code for creating a child process looks like this:

    pid_t child;

    if ((child = fork()) < 0)
        /* handle error */
    else if (child == 0)
        /* this is the new process */
    else
        /* this is the original parent process */
The pid_t is a signed integer type for holding PID values. It is most likely a plain int, but it makes code more self-documenting and should be used instead of int.

In Unix parlance, besides being the name of a system call, the word “fork” is both a verb and a noun. We might say that “one process forks another,” and that “after the fork, two processes are running.” (Think “fork in a road” and not “fork, knife and spoon.”)

9.1.1.1 After the fork(): Shared and Distinct Attributes

The child “inherits” identical copies of a large number of attributes from the parent. Many of these attributes are specialized and irrelevant here. Thus, the following list is purposely incomplete. The following attributes are the relevant ones:

• The environment; see Section 2.4, “The Environment,” page 40.

• All open files and open directories; see Section 4.4.1, “Understanding File Descriptors,” page 92, and see Section 5.3.1, “Basic Directory Reading,” page 133.

• The umask setting; see Section 4.6, “Creating Files,” page 106.

• The current working directory; see Section 8.4.1, “Changing Directory: chdir() and fchdir(),” page 256.

• The root directory; see Section 8.6, “Changing the Root Directory: chroot(),” page 276.

• The current priority (a.k.a. “nice value”; we discuss this shortly; see Section 9.1.3, “Setting Process Priority: nice(),” page 291).
• The controlling terminal. This is the terminal device (physical console or terminal-emulator window) that is allowed to send signals to a process (such as CTRL-Z to stop running jobs). This is discussed later, in Section 9.2.1, “Job Control Overview,” page 312.

• The process signal mask and all current signal dispositions (not discussed yet; see Chapter 10, “Signals,” page 347).

• The real, effective, and saved set-user and set-group IDs and the supplemental group set (not discussed yet; see Chapter 11, “Permissions and User and Group ID Numbers,” page 403).

Besides the fork() return value, the two processes differ in the following ways:

• Each one has a unique process ID and parent process ID (PID and PPID). These are described in Section 9.1.2, “Identifying a Process: getpid() and getppid(),” page 289.

• The child’s PID will not equal that of any existing process group ID (see Section 9.2, “Process Groups,” page 312).

• The accumulated CPU times for the child process and its future children are initialized to zero. (This makes sense; after all, it is a brand-new process.)

• Any signals that were pending in the parent are cleared in the child, as are any pending alarms or timers. (We haven’t covered these topics yet; see Chapter 10, “Signals,” page 347, and see Section 14.3.3, “Interval Timers: setitimer() and getitimer(),” page 546.)

• File locks held by the parent are not duplicated in the child (also not discussed yet; see Section 14.2, “Locking Files,” page 531).

9.1.1.2 File Descriptor Sharing

The attributes that the child inherits from the parent are all set to the same values they had in the parent at the time of the fork(). From then on, though, the two processes proceed on their merry ways, (mostly) independent of each other. For example, if the child changes directory, the parent’s directory is not affected. Similarly, if the child changes its environment, the parent’s environment is not changed.
Open files are a significant exception to this rule. Open file descriptors are shared, and an action by one process on a shared file descriptor affects the state of the file for the other process as well. This is best understood after study of Figure 9.1.

[Figure omitted: two per-process descriptor tables (PID 42 and PID 45), whose entries 0, 1, and 2 point into the kernel’s shared file table; each file table entry holds bookkeeping data and the current offset, which in turn refers to the file contents.]

FIGURE 9.1 File descriptor sharing
The figure displays the kernel’s internal data structures. The key data structure is the file table. Each element refers to an open file. Besides other bookkeeping data, the file table maintains the current position (read/write offset) in the file. This is adjusted either automatically each time a file is read or written or directly with lseek() (see Section 4.5, “Random Access: Moving Around within a File,” page 102). The file descriptor returned by open() or creat() acts as an index into a per-process array of pointers into the file table. This per-process array won’t be any larger than the value returned by getdtablesize() (see Section 4.4.1, “Understanding File Descriptors,” page 92). Figure 9.1 shows two processes sharing standard input and standard output; for each, both point to the same entries in the file table. Thus, when process 45 (the child) does a read(), the shared offset is updated; the next time process 42 (the parent) does a read(), it starts at the position where process 45’s read() finished. This can be seen easily at the shell level:
    $ cat data                              Show demo data file contents
    line 1
    line 2
    line 3
    line 4
    $ ls -l test1 ; cat test1               Mode and contents of test program
    -rwxr-xr-x   1 arnold   devel   93 Oct 20 22:11 test1
    #! /bin/sh
    read line ; echo p: $line               Read a line in parent shell, print it
    ( read line ; echo c: $line )           Read a line in child shell, print it
    read line ; echo p: $line               Read a line in parent shell, print it
    $ test1 < data                          Run the program
    p: line 1                               Parent starts at beginning
    c: line 2                               Child picks up where parent left off
    p: line 3                               Parent picks up where child left off
The first executable line of test1 reads a line from standard input, changing the offset in the file. The second line of test1 runs the commands enclosed between the parentheses in a subshell. This is a separate shell process created—you guessed it—with fork(). The child subshell inherits standard input from the parent, including the current file offset. This process reads a line and updates the shared offset into the file. When the third line, back in the parent shell, reads the file, it starts where the child left off. Although the read command is built into the shell, things work the same way with external commands. Some early Unix systems had a line command that read one line of input (one character at a time!) for use within shell scripts; if the file offset weren’t shared, it would be impossible to use such a command in a loop. File descriptor sharing and inheritance play a pivotal role in shell I/O redirection; the system calls and their semantics make the shell-level primitives straightforward to implement in C, as we see later in the chapter. 9.1.1.3 File Descriptor Sharing and close() The fact that multiple file descriptors can point at the same open file has an important consequence: A file is not closed until all its file descriptors are closed. We see later in the chapter that multiple descriptors for the same file can exist not only across processes but even within the same process; this rule is particularly important for working with pipes. If you need to know if two descriptors are open on the same file, you can use fstat() (see Section 5.4.2, “Retrieving File Information,” page 141) on the two descriptors
with two different struct stat structures. If the corresponding st_dev and st_ino fields are equal, they’re the same file. We complete the discussion of file descriptor manipulation and the file descriptor table later in the chapter.
9.1.2 Identifying a Process: getpid() and getppid()

Each process has a unique process ID number (the PID). Two system calls provide the current PID and the PID of the parent process:

    #include <sys/types.h>                                 POSIX
    #include <unistd.h>

    pid_t getpid(void);
    pid_t getppid(void);

The functions are about as simple as they come:

pid_t getpid(void)
    Returns the PID of the current process.

pid_t getppid(void)
    Returns the parent’s PID.
PID values are unique; by definition there cannot be two running processes with the same PID. PIDs usually increase in value, in that a child process generally has a higher PID than its parent. On many systems, however, PID values wrap around; when the system maximum value for PIDs is exceeded, the next process created will have the lowest unused PID number. (Nothing in POSIX requires this behavior, and some systems assign unused PID numbers randomly.)

If the parent dies or exits, the child is given a new parent, init. In this case, the new parent PID will be 1, which is init’s PID. Such a child is termed an orphan. The following program, ch09-reparent.c, demonstrates this. This is also the first example we’ve seen of fork() in action:

     1  /* ch09-reparent.c --- show that getppid() can change values */
     2
     3  #include <stdio.h>
     4  #include <errno.h>
     5  #include <string.h>
     6  #include <unistd.h>
     7
     8  /* main --- do the work */
     9
    10  int main(int argc, char **argv)
    11  {
    12      pid_t pid, old_ppid, new_ppid;
    13      pid_t child, parent;
    14
    15      parent = getpid();              /* before fork() */
    16
    17      if ((child = fork()) < 0) {
    18          fprintf(stderr, "%s: fork of child failed: %s\n",
    19                  argv[0], strerror(errno));
    20          exit(1);
    21      } else if (child == 0) {
    22          old_ppid = getppid();
    23          sleep(2);                   /* see Chapter 10 */
    24          new_ppid = getppid();
    25      } else {
    26          sleep(1);
    27          exit(0);                    /* parent exits after fork() */
    28      }
    29
    30      /* only the child executes this */
    31      printf("Original parent: %d\n", parent);
    32      printf("Child: %d\n", getpid());
    33      printf("Child's old ppid: %d\n", old_ppid);
    34      printf("Child's new ppid: %d\n", new_ppid);
    35
    36      exit(0);
    37  }
Line 15 retrieves the PID of the initial process, using getpid(). Lines 17–20 fork the child, checking for an error return. Lines 21–24 are executed by the child: Line 22 retrieves the PPID. Line 23 suspends the process for two seconds (see Section 10.8.1, “Alarm Clocks: sleep(), alarm(), and SIGALRM,” page 382, for information about sleep()), and then line 24 retrieves the PPID again. Lines 25–27 run in the parent. Line 26 delays the parent for one second, giving the child enough time to make the first getppid() call. Line 27 then exits the parent. Lines 31–34 print the values. Note that the parent variable, which was set before the fork, still maintains its value in the child. After forking, the two processes have identical but independent copies of their address spaces. Here’s what happens when the program runs:
    $ ch09-reparent                     Run the program
    $ Original parent: 6582             Program finishes: shell prompts and child prints
    Child: 6583
    Child's old ppid: 6582
    Child's new ppid: 1
Remember that the two programs execute in parallel. This is depicted graphically in Figure 9.2.

[Figure omitted: a timeline for PIDs 6582 and 6583. Time 0: ‘child = fork();’ creates the child. Time 1: the parent calls ‘sleep(1);’ while the child calls ‘old_ppid = getppid();’. Time 2: the parent calls ‘exit(0);’ and the child calls ‘sleep(2);’; while the child is asleep, it is reparented. Times 3–4: the orphaned child calls ‘new_ppid = getppid();’.]

FIGURE 9.2 Two processes running in parallel after forking
NOTE The use of sleep() to have one process outlive another works most of the time. However, occasionally it fails, leading to hard-to-reproduce and hard-to-find bugs. The only way to guarantee correct behavior is explicit synchronization with wait() or waitpid(), which are described further on in the chapter (see Section 9.1.6.1, “Using POSIX Functions: wait() and waitpid(),” page 306).
9.1.3 Setting Process Priority: nice()

As processes run, the kernel dynamically changes each process’s priority. As in life, higher-priority items get attention before lower-priority ones. In brief, each process is allotted a small amount of time in which to run, called its time slice. When the time slice finishes, if the current process is still the one with the highest priority, it is allowed to continue running. Linux, like Unix, provides preemptive multitasking. This means that the kernel can preempt a process (pause it) if it’s time to let another process run. Processes that have been running a lot (for example, compute-intensive processes) have their priority lowered
at the end of their time slice, to let other processes have a chance at the processor. Similarly, processes that have been idle while waiting for I/O (such as an interactive text editor) are given a higher priority so that they can respond to the I/O when it happens. In short, the kernel makes sure that all processes, averaged over time, get their “fair share” of the CPU. Raising and lowering priorities are part of this process.

Designing a good process scheduler for the kernel is an art; the nitty-gritty details are beyond the scope of this book. However, a process can influence the kernel’s priority assignment algorithm by way of its nice value. The nice value is an indication of “how nice” the process is willing to be toward other processes. Thus, higher nice values indicate increasingly more patient processes; that is, ones that are increasingly nice toward others, lowering their priority with respect to that of other processes. A negative nice value, on the other hand, indicates that a process wishes to be “less nice” towards others. Such a process is more selfish, wanting more CPU time for itself.1 Fortunately, while users can increase their nice value (be more nice), only root can decrease the nice value (be less nice).

The nice value is only one factor in the equation used by the kernel to compute the priority; the nice value is not the priority itself, which varies over time, based on the process’s behavior and the state of other processes in the system. To change the nice value, use the nice() system call:

    #include <unistd.h>                                    XSI

    int nice(int inc);
The default nice value is 0. The allowed range for nice values is -20 to 19. This takes some getting used to. The more negative the value, the higher the process’s priority: -20 is the highest priority (least nice), and 19 is the lowest priority (most nice). The inc argument is the increment by which to change the nice value. Use ‘nice(0)’ to retrieve the current value without changing it. If the result of ‘current_nice_value + inc’ would be outside the range -20 to 19, the system forces the result to be inside the range.
1 Such processes often display childlike behavior.
The return value is the new nice value or -1 if there was an error. Since -1 is also a valid nice value, when calling nice() you must explicitly set errno to zero first, and then check it afterwards to see if there was a problem:

    int niceval;
    int inc = /* whatever */;

    errno = 0;
    if ((niceval = nice(inc)) < 0 && errno != 0) {
        fprintf(stderr, "nice(%d) failed: %s\n", inc, strerror(errno));
        /* other recovery */
    }
This example can fail if inc has a negative value and the process is not running as root.

9.1.3.1 POSIX vs. Reality

The nice value range of -20 to 19 that Linux uses is historical; it dates back at least as far as V7. POSIX expresses the situation in more indirect language, which allows for implementation flexibility while maintaining historical compatibility. It also makes the standard harder to read and understand, but then, that’s why you’re reading this book.

So, here’s how POSIX describes it. First, the process’s nice value as maintained by the system ranges from 0 to ‘(2 * NZERO) - 1’. The constant NZERO is defined in <limits.h> and must be at least 20. This gives us the range 0–39. Second, as we described, the sum of the current nice value and the inc increment is forced into this range. Finally, the return value from nice() is the process nice value minus NZERO. With an NZERO value of 20, this gives us the original -20 to 19 range that we initially described.

The upshot is that nice()’s return value actually ranges from ‘-NZERO’ to ‘NZERO-1’, and it’s best to write your code in terms of that symbolic constant. However, practically speaking, you’re unlikely to find a system where NZERO is not 20.
9.1.4 Starting New Programs: The exec() Family

Once a new process is running (through fork()), the next step is to start a different program running in the process. There are multiple functions that serve different purposes:
    #include <unistd.h>                                              POSIX

    int execve(const char *filename, char *const argv[],             System call
               char *const envp[]);

    int execl(const char *path, const char *arg, ...);               Wrappers
    int execlp(const char *file, const char *arg, ...);
    int execle(const char *path, const char *arg, ...,
               char *const envp[]);
    int execv(const char *path, char *const argv[]);
    int execvp(const char *file, char *const argv[]);
We refer to these functions as the “exec() family.” There is no function named exec(); instead, we use this function name to mean any of the above listed functions. As with fork(), “exec” is used in Unix parlance as a verb, meaning to execute (run) a program, and as a noun.

9.1.4.1 The execve() System Call

The simplest function to explain is execve(). It is also the underlying system call. The others are wrapper functions, as is explained shortly.

int execve(const char *filename, char *const argv[], char *const envp[])

filename is the name of the program to execute. It may be a full or relative pathname. The file must be in an executable format that the kernel understands. Modern systems use the ELF (Executable and Linking Format) executable format. GNU/Linux understands ELF and several others. Interpreted scripts can be executed with execve() if they use the ‘#!’ special first line that names the interpreter to use. (Scripts that don’t start with ‘#!’ will fail.) Section 1.1.3, “Executable Files,” page 7, provides an example use of ‘#!’.

argv is a standard C argument list—an array of character pointers to argument strings, including the value to use for argv[0], terminated with a NULL pointer.

envp is the environment to use for the new process, with the same layout as the environ global variable (see Section 2.4, “The Environment,” page 40). In the new program, this environment becomes the initial value of environ.
A call to exec() should not return. If it does, there was a problem. Most commonly, either the requested program doesn’t exist, or it exists but it isn’t executable (ENOENT and EACCES for errno, respectively). Many more things can go wrong; see the execve(2) manpage.
Assuming that the call succeeds, the current contents of the process’s address space are thrown away. (The kernel does arrange to save the argv and envp data in a safe place first.) The kernel loads the executable code for the new program, along with any global and static variables. Next, the kernel initializes the environment with that passed to execve(), and then it calls the new program’s main() routine with the argv array passed to execve(). It counts the number of arguments and passes that value to main() in argc.

At that point, the new program is running. It doesn’t know (and can’t find out) what program was running in the process before it. Note that the process ID does not change. Many other attributes remain in place across the exec; we cover this in more detail shortly.

In a loose analogy, exec() is to a process what life roles are to a person. At different times during the day, a single person might function as parent, spouse, friend, student or worker, store customer, and so on. Yet it is the same underlying person performing the different roles. So too, the process—its PID, open files, current directory, etc.—doesn’t change, while the particular job it’s doing—the program run with exec()—can.

9.1.4.2 Wrapper Functions: execl() et al.

Five additional functions, acting as wrappers, provide more convenient interfaces to execve(). The first group all take a list of arguments, each one passed as an explicit function parameter:

int execl(const char *path, const char *arg, ...)
    The first argument, path, is the pathname of the file to execute. Subsequent arguments, starting with arg, are the individual elements to be placed in argv. As before, argv[0] must be explicitly included. You must pass a terminating NULL pointer as the final argument so that execl() can tell where the argument list ends. The new program inherits whatever environment is in the current program’s environ variable.

int execlp(const char *file, const char *arg, ...)
    This function is like execl(), but it simulates the shell’s command searching mechanism, looking for file in each directory named in the PATH environment variable. If file contains a / character, this search is not done. If PATH isn’t present in the environment, execlp() uses a default path. On GNU/Linux, the default is ":/bin:/usr/bin", but it may be different on other systems. (Note that the leading colon in PATH means that the current directory is searched first.) Furthermore, if the file is found and has execute permission but cannot be exec’d because it isn’t in a known executable format, execlp() assumes that the program is a shell script, and execs the shell with the filename as an argument.

int execle(const char *path, const char *arg, ..., char *const envp[])
    This function is also like execl(), but it accepts an additional argument, envp, which becomes the new program’s environment. As with execl(), you must supply the terminating NULL pointer to end the argument list, before envp.

The second group of wrapper functions accepts an argv style array:

int execv(const char *path, char *const argv[])
    This function is like execve(), but the new program inherits whatever environment is in the current program’s environ variable.

int execvp(const char *file, char *const argv[])
    This function is like execv(), but it does the same PATH search that execlp() does. It also does the same falling back to exec’ing the shell if the found file cannot be executed directly.
Table 9.1 summarizes the six exec() functions.

TABLE 9.1 Alphabetical exec() family summary

    Function    Path search    Uses environ    Purpose
    execl()                         ✓          Execute arg list.
    execle()                                   Execute arg list with environment.
    execlp()        ✓               ✓          Execute arg list by path search.
    execv()                         ✓          Execute with argv.
    execve()                                   Execute with argv and environment (system call).
    execvp()        ✓               ✓          Execute with argv by path search.

The execlp() and execvp() functions are best avoided unless you know that the PATH environment variable contains a reasonable list of directories.
9.1.4.3 Program Names and argv[0]

Until now, we have always treated argv[0] as the program name. We know that it may or may not contain a / character, depending on how the program is invoked; if it does, then that’s usually a good clue as to the pathname used to invoke the program. However, as should be clear by now, argv[0] being the filename is only a convention. There’s nothing stopping you from passing an arbitrary string to the exec’d program for argv[0]. The following program, ch09-run.c, demonstrates passing an arbitrary string:

     1  /* ch09-run.c --- run a program with a different name and any arguments */
     2
     3  #include <stdio.h>
     4  #include <errno.h>
     5  #include <string.h>
     6  #include <unistd.h>
     7
     8  /* main --- adjust argv and run named program */
     9  int main(int argc, char **argv)
    10  {
    11      char *path;
    12
    13      if (argc < 3) {
    14          fprintf(stderr, "usage: %s path arg0 [ arg ... ]\n", argv[0]);
    15          exit(1);
    16      }
    17
    18      path = argv[1];
    19
    20      execv(path, argv + 2);      /* skip argv[0] and argv[1] */
    21
    22      fprintf(stderr, "%s: execv() failed: %s\n", argv[0],
    23          strerror(errno));
    24      exit(1);
    25  }
The first argument is the pathname of the program to run and the second is the new name for the program (which most utilities ignore, other than for error messages); any other arguments are passed on to the program being exec’d. Lines 13–16 do error checking. Line 18 saves the path in path. Line 20 does the exec; if lines 22–23 run, it’s because there was a problem. Here’s what happens when we run the program:
    $ ch09-run /bin/grep whoami foo         Run grep
    a line                                  Input line doesn’t match
    a line with foo in it                   Input line that does match
    a line with foo in it                   It’s printed
    ^D                                      EOF

    $ ch09-run nonexistent-program foo bar  Demonstrate failure
    ch09-run: execv() failed: No such file or directory
This next example is a bit bizarre: we have ch09-run run itself, passing ‘foo’ as the program name. Since there aren’t enough arguments for the second run, it prints the usage message and exits:

    $ ch09-run ./ch09-run foo
    usage: foo path arg0 [ arg ... ]
While not very useful, ch09-run clearly shows that argv[0] need not have any relationship to the file that is actually run. In System III (circa 1980), the cp, ln, and mv commands were one executable file, with three links by those names in /bin. The program would examine argv[0] and decide what it should do. This saved a modest amount of disk space, at the expense of complicating the source code and forcing the program to choose a default action if invoked by an unrecognized name. (Some current commercial Unix systems continue this practice!) Without stating an explicit reason, the GNU Coding Standards recommends that a program not base its behavior upon its name. One reason we see is that administrators often install the GNU version of a utility alongside the standard ones on commercial Unix systems, using a g prefix: gmake, gawk, and so on. If such programs expect only the standard names, they’ll fail when run with a different name. Also, today, disk space is cheap; if two almost identical programs can be built from the same source code, it’s better to do it that way, using #ifdef or what-have-you. For example, grep and egrep share considerable code, but the GNU version builds two separate executables. 9.1.4.4 Attributes Inherited across exec() As with fork(), a number of attributes remain in place after a program does an exec: • All open files and open directories; see Section 4.4.1, “Understanding File Descriptors,” page 92, and see Section 5.3.1, “Basic Directory Reading,” page 133. (This doesn’t include files marked close-on-exec, as described later in the chapter; see Section 9.4.3.1, “The Close-on-exec Flag,” page 329.)
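For the historical record, the System III style of dispatching on the invocation name looked roughly like this (our own reconstruction, not the actual source; the function name dispatch() is ours):

```c
#include <string.h>

/* dispatch --- choose an action based on the name we were invoked by.
   Sketch of the one-binary cp/ln/mv trick described in the text. */
const char *dispatch(const char *argv0)
{
    const char *base = strrchr(argv0, '/');     /* skip any directory part */

    base = (base != NULL) ? base + 1 : argv0;

    if (strcmp(base, "cp") == 0)
        return "copy";
    else if (strcmp(base, "ln") == 0)
        return "link";
    else if (strcmp(base, "mv") == 0)
        return "move";
    else
        return "unknown";   /* the forced default action the text mentions */
}
```

The need for that final "unknown" branch is exactly the complication the GNU Coding Standards' advice avoids: a renamed or prefixed binary (gmv, say) falls through to the default.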
9.1 Process Creation and Management
299
• The umask setting; see Section 4.6, “Creating Files,” page 106.

• The current working directory; see Section 8.4.1, “Changing Directory: chdir() and fchdir(),” page 256.

• The root directory; see Section 8.6, “Changing the Root Directory: chroot(),” page 276.

• The current nice value.

• The process ID and parent process ID.

• The process group ID; see Section 9.2, “Process Groups,” page 312.

• The session ID and the controlling terminal; for both, see Section 9.2.1, “Job Control Overview,” page 312.

• The process signal mask and any pending signals, as well as any unexpired alarms or timers (not discussed yet; see Chapter 10, “Signals,” page 347).

• The real user ID and group IDs and the supplemental group set. The effective user and group IDs (and thus the saved set-user and set-group IDs) can be set by the setuid and setgid bits on the file being exec’d. (None of this has been discussed yet; see Chapter 11, “Permissions and User and Group ID Numbers,” page 403).

• File locks remain in place (also not discussed yet; see Section 14.2, “Locking Files,” page 531).

• Accumulated CPU times for the process and its children don’t change.

After an exec, signal disposition changes; see Section 10.9, “Signals Across fork() and exec(),” page 398, for more information.

All open files and directories remain open and available after the exec. This is how programs inherit standard input, output, and error: They’re in place when the program starts up.

Most of the time, when you fork and exec a separate program, you don’t want it to inherit anything but file descriptors 0, 1, and 2. In this case, you can manually close all other open files in the child, after the fork but before the exec. Alternatively, you can mark a file descriptor to be automatically closed by the system upon an exec; this latter option is discussed later in the chapter (see Section 9.4.3.1, “The Close-on-exec Flag,” page 329).
Chapter 9 • Process Management and Pipes
300
9.1.5 Terminating a Process

Process termination involves two steps: The process exits, passing an exit status to the system, and the parent process recovers the information.

9.1.5.1 Defining Process Exit Status

The exit status (also known variously as the exit value, return code, and return value) is an 8-bit value that the parent can recover when the child exits (in Unix parlance, “when the child dies”). By convention, an exit status of 0 means that the program ran with no problems. Any nonzero exit status indicates some sort of failure; the program determines the values to use and their meanings, if any. (For example, grep uses 0 to mean that it matched the pattern at least once, 1 to mean that it did not match the pattern at all, and 2 to mean that an error occurred.) This exit status is available at the shell level (for Bourne-style shells) in the special variable $?.

The C standard defines two constants, which are all you should use for strict portability to non-POSIX systems:

EXIT_SUCCESS
    The program exited with no problems. Zero can also be used to mean success.

EXIT_FAILURE
    The program had some kind of problem.

In practice, using only these values is rather constraining. Instead, you should pick a small set of return codes, document their meanings, and use them. (For example, 1 for command-line option and argument errors, 2 for I/O errors, 3 for bad data errors, and so on.) For readability, it pays to use #defined constants or an enum for them. Having too large a list of errors makes using them cumbersome; most of the time the invoking program (or user) only cares about zero vs. nonzero.

When the binary success/failure distinction is adequate, the pedantic programmer uses EXIT_SUCCESS and EXIT_FAILURE. Our own style is more idiomatic, using the explicit constants 0 or 1 with return and exit(). This is so common that it is learned early on and quickly becomes second nature. However, you should make your own decision for your own projects.
NOTE
Only the least-significant eight bits of the value are available to the parent process. Thus, you should use values in the range 0–255. As we’ll see shortly, 126 and 127 have a conventional meaning (above and beyond plain “unsuccessful”), to which your programs should adhere.

Since only the least-significant eight bits matter, you should never use a negative exit status. When the last eight bits are retrieved from small negative numbers, they become large positive values! (For example, –1 becomes 255, and –5 becomes 251.) We have seen C programming books that get this wrong—don’t be misled.
9.1.5.2 Returning from main()

A program can terminate voluntarily in one of two ways: by using one of the functions described next or by returning from main(). (A third, more drastic, way is described later, in Section 12.4, “Committing Suicide: abort(),” page 445.) In the latter case, you should use an explicit return value instead of falling off the end of the function:

/* Good: */
int main(int argc, char **argv)
{
	/* code here */
	return 0;
}

/* Bad: */
int main(int argc, char **argv)
{
	/* code here */
	/* ?? What does main() return ?? */
}
The 1999 C standard indicates that when main() returns by falling off the end, the behavior is as if it had returned 0. (This is also true for C++; however, the 1989 C standard leaves this case purposely undefined.) In all cases, it’s poor practice to rely on this behavior; one day you may be programming for a system with meager C runtime support or an embedded system, or somewhere else where it will make a difference. (In general, falling off the end of any non-void function is a bad idea; it can only lead to buggy code.)

The value returned from main() is automatically passed back to the system, from which the parent can recover it later. We describe how in Section 9.1.6.1, “Using POSIX Functions: wait() and waitpid(),” page 306.

NOTE
On GNU/Linux systems, the c99 compiler-driver command runs the compiler with the appropriate options such that the return value when falling off the end is 0. Plain gcc doesn’t do this.
9.1.5.3 Exiting Functions

The other way to voluntarily terminate a program is by calling an exiting function. The C standard defines the following functions:

#include <stdlib.h>                                        ISO C

void exit(int status);
void _Exit(int status);
int atexit(void (*function)(void));
The functions work as follows:

void exit(int status)
    This function terminates the program. status is passed to the system for recovery by the parent. Before the program exits, exit() calls all functions registered with atexit(), flushes and closes all open FILE * streams, and removes any temporary files created with tmpfile() (see Section 12.3.2, “Creating and Opening Temporary Files (Good),” page 441). When the process exits, the kernel closes any remaining open files (those opened by open(), creat(), or file descriptor inheritance), frees up its address space, and releases any other resources it may have been using. exit() never returns.

void _Exit(int status)
    This function is essentially identical to the POSIX _exit() function; we delay discussion of it for a short while.

int atexit(void (*function)(void))
    function is a pointer to a callback function to be called at program exit. exit() invokes the callback function before it closes files and terminates. The idea is that an application can provide one or more cleanup functions to be run before finally shutting down. Providing a function is called registering it. (Callback functions for nftw() were described in Section 8.4.3.2, “The nftw() Callback Function,” page 263; it’s the same idea here, although atexit() invokes each registered function only once.) atexit() returns 0 on success or -1 on error, and sets errno appropriately.
The following program does no useful work, but it does demonstrate how atexit() works:
/* ch09-atexit.c --- demonstrate atexit().
   Error checking omitted for brevity. */

#include <stdio.h>
#include <stdlib.h>

/*
 * The callback functions here just answer roll call.
 * In a real application, they would do more.
 */

void callback1(void) { printf("callback1 called\n"); }

void callback2(void) { printf("callback2 called\n"); }

void callback3(void) { printf("callback3 called\n"); }

/* main --- register functions and then exit */

int main(int argc, char **argv)
{
	printf("registering callback1\n");
	printf("registering callback2\n");
	printf("registering callback3\n");

	atexit(callback1);
	atexit(callback2);
	atexit(callback3);

	printf("exiting now\n");
	exit(0);
}
Here’s what happens when it’s run:

$ ch09-atexit                      Main program runs
registering callback1
registering callback2
registering callback3
exiting now
callback3 called                   Callback functions run in reverse order
callback2 called
callback1 called
As the example demonstrates, functions registered with atexit() run in the reverse order in which they were registered: most recent one first. (This is also termed last-in first-out, abbreviated LIFO.)

POSIX defines the _exit() function. Unlike exit(), which invokes callback functions and does cleanup, _exit() is the “die immediately” function:

#include <unistd.h>                                        POSIX

void _exit(int status);
The status is given to the system, just as for exit(), but the process terminates immediately. The kernel still does the usual cleanup: All open files are closed, the memory used by the address space is released, and any other resources the process was using are also released.
In practice, the ISO C _Exit() function is identical to _exit(). The C standard says it’s implementation defined as to whether _Exit() calls functions registered with atexit() and closes open files. For GLIBC systems, it does not, behaving like _exit().

The time to use _exit() is when an exec fails in a forked child. In this case, you don’t want to use regular exit(), since that flushes any buffered data held by FILE * streams. When the parent later flushes its copies of the buffers, the buffered data ends up being written twice; obviously this is not good. For example, suppose you wish to run a shell command and do the fork and exec yourself. Such code would look like this:

char *shellcommand = "...";
pid_t child;

if ((child = fork()) == 0) {	/* child */
	execl("/bin/sh", "sh", "-c", shellcommand, NULL);
	_exit(errno == ENOENT ? 127 : 126);
}
/* parent continues */
The errno test and exit values follow conventions used by the POSIX shell. If a requested program doesn’t exist (ENOENT—no entry for it in a directory), then the exit value is 127. Otherwise, the file exists but couldn’t be exec’d for some other reason, so the exit status is 126. It’s a good idea to follow this convention in your own programs too.

Briefly, to make good use of exit() and atexit(), you should do the following:

• Define a small set of exit status values that your program will use to communicate information to its caller. Use #defined constants or an enum for them in your code.

• Decide if having callback functions for use with atexit() makes sense. If it does, register them in main() at the appropriate point; for example, after parsing options, and after initializing whatever data structures the callback functions are supposed to clean up. Remember that the functions are called in LIFO (last-in first-out) order.

• Use exit() everywhere to exit from the program when something goes wrong, and exiting is the correct action to take. Use the error codes that you defined.
• An exception is main(), for which you can use return if you wish. Our own style is generally to use exit() when there are problems and ‘return 0’ at the end of main() if everything has gone well.

• Use _exit() or _Exit() in a child process if exec() fails.
9.1.6 Recovering a Child’s Exit Status

When a process dies, the normal course of action is for the kernel to release all its resources. The kernel does retain the dead process’s exit status, as well as information about the resources it used during its lifetime, and the PID continues to be counted as being in use. Such a dead process is termed a zombie. The parent process, be it the original parent or init, can recover the child’s exit status. Or, by use of BSD functions that aren’t standardized by POSIX, the exit status together with the resource usage information can be recovered. Status recovery is done by waiting for the process to die: This is also known as reaping the process.2

There is considerable interaction between the mechanisms that wait for children to die and the signal mechanisms we haven’t described yet. Which one to describe first is a bit of a chicken-and-egg problem; we’ve chosen to talk about the child-waiting mechanisms first, and Chapter 10, “Signals,” page 347, provides the full story on signals.

For now, it’s enough to understand that a signal is a way to notify a process that some event has occurred. Processes can generate signals that get sent to themselves, or signals can be sent externally by other processes or by a user at a terminal. For example, CTRL-C sends an “interrupt” signal, and CTRL-Z sends a job control “stop” signal. By default, many signals, such as the interrupt signal, cause the receiving process to die. Others, such as the job control signals, cause it to change state. The child waiting mechanisms can determine whether a process suffered death-by-signal, and, if so, which signal it was. The same is true for processes stopping and, on some systems, when a process continues.
2 We are not making this up. The terminology is indeed rather morbid, but such was the original Unix designers’ sense of humor.
9.1.6.1 Using POSIX Functions: wait() and waitpid()

The original V7 system call was wait(). The newer POSIX call, based on BSD functionality, is waitpid(). The function declarations are:

#include <sys/types.h>                                     POSIX
#include <sys/wait.h>

pid_t wait(int *status);
pid_t waitpid(pid_t pid, int *status, int options);
wait() waits for any child process to die; the information as to how it died is returned in *status. (We discuss how to interpret *status shortly.) The return value is the PID of the process that died or -1 if an error occurred.
If there is no child process, wait() returns -1 with errno set to ECHILD (no child process). Otherwise, it waits for the first child to die or for a signal to come in.

The waitpid() function lets you wait for a specific child process to exit. It provides considerable flexibility and is the preferred function to use. It too returns the PID of the process that died or -1 if an error occurred. The arguments are as follows:

pid_t pid
    The value specifies which child to wait for, both by real pid and by process group. The pid value has the following meanings:

    pid < –1    Wait for any child process with a process group ID equal to the absolute value of pid.
    pid = –1    Wait for any child process. This is the way wait() works.
    pid = 0     Wait for any child process with a process group ID equal to that of the parent process’s process group.
    pid > 0     Wait for the specific process with the PID equal to pid.

int *status
    This is the same as for wait(). <sys/wait.h> defines various macros that interpret the value in *status, which we describe soon.

int options
    This should be either 0 or the bitwise OR of one or more of the following flags:
WNOHANG
    If no child has exited, return immediately. That way you can check periodically to see if any children have died. (Such periodic checking is known as polling for an event.)

WUNTRACED
    Return information about a child process that has stopped but that hasn’t exited yet. (For example, with job control.)

WCONTINUED
    (XSI.) Return information about a child process that has continued if the status of the child has not been reported since it changed. This too is for job control. This flag is an XSI extension and is not available under GNU/Linux.

Multiple macros work on the filled-in *status value to determine what happened. They tend to come in pairs: one macro to determine if something occurred, and if that macro is true, one or more macros that retrieve the details. The macros are as follows:

WIFEXITED(status)
    This macro is nonzero (true) if the process exited (as opposed to changing state).

WEXITSTATUS(status)
    This macro gives the exit status; it equals the least-significant eight bits of the value passed to exit() or returned from main(). You should use this macro only if WIFEXITED(status) is true.

WIFSIGNALED(status)
    This macro is nonzero if the process suffered death-by-signal.

WTERMSIG(status)
    This macro provides the signal number that terminated the process. You should use this macro only if WIFSIGNALED(status) is true.

WIFSTOPPED(status)
    This macro is nonzero if the process was stopped.

WSTOPSIG(status)
    This macro provides the signal number that stopped the process. (Several signals can stop a process.) You should use this macro only if WIFSTOPPED(status) is true. Job control signals are discussed in Section 10.8.2, “Job Control Signals,” page 383.

WIFCONTINUED(status)
    (XSI.) This macro is nonzero if the process was continued. There is no corresponding WCONTSIG() macro, since only one signal can cause a process to continue. Note that this macro is an XSI extension. In particular, it is not available on GNU/Linux. Therefore, if you wish to use it, bracket your code inside ‘#ifdef WIFCONTINUED … #endif’.

WCOREDUMP(status)
    (Common.) This macro is nonzero if the process dumped core. A core dump is the memory image of a running process created when the process terminates. It is intended for use later for debugging. Unix systems name the file core, whereas GNU/Linux systems use core.pid, where pid is the process ID of the process that died. Certain signals terminate a process and produce a core dump automatically. Note that this macro is nonstandard. GNU/Linux, Solaris, and BSD systems support it, but some other Unix systems do not. Thus, here too, if you wish to use it, bracket your code inside ‘#ifdef WCOREDUMP … #endif’.

Most programs don’t care why a child process died; they merely care that it died, perhaps noting if it exited successfully or not. The GNU Coreutils install program demonstrates such straightforward use of fork(), execlp(), and wait(). The -s option causes install to run the strip program on the binary executable being installed. (strip removes debugging and other information from an executable file. This can save considerable space, relatively speaking. On modern systems with multi-gigabyte disk drives, it’s rarely necessary to strip executables upon installation.) Here is the strip() function from install.c:

513  /* Strip the symbol table from the file PATH.
514     We could dig the magic number out of the file first to
515     determine whether to strip it, but the header files and
516     magic numbers vary so much from system to system that making
517     it portable would be very difficult.  Not worth the effort. */
518
519  static void
520  strip (const char *path)
521  {
522    int status;
523    pid_t pid = fork ();
524
525    switch (pid)
526      {
527      case -1:
528        error (EXIT_FAILURE, errno, _("fork system call failed"));
529        break;
530      case 0:			/* Child. */
531        execlp ("strip", "strip", path, NULL);
532        error (EXIT_FAILURE, errno, _("cannot run strip"));
533        break;
534      default:			/* Parent. */
535        /* Parent process. */
536        while (pid != wait (&status))	/* Wait for kid to finish. */
537          /* Do nothing. */ ;
538        if (status)
539          error (EXIT_FAILURE, 0, _("strip failed"));
540        break;
541      }
542  }
Line 523 calls fork(). The switch statement then takes the correct action for error return (lines 527–529), child process (lines 530–533), and parent process (lines 534–539). The idiom on lines 536–537 is common; it waits until the specific child of interest exits. wait()’s return value is the PID of the reaped child. This is compared with that of the forked child. status is unused other than to see if it’s nonzero (line 538), in which case the child exited unsuccessfully. (The test, while correct, is coarse but simple. A test like ‘if (WIFEXITED(status) && WEXITSTATUS(status) != 0)’ would be more pedantically correct.) From the description and code presented so far, it may appear that parent programs must choose a specific point to wait for any child processes to die, possibly polling in a loop (as install.c does), waiting for all children. In Section 10.8.3, “Parental Supervision: Three Different Strategies,” page 385, we’ll see that this is not necessarily the case. Rather, signals provide a range of mechanisms to use for managing parent notification when a child process dies.
9.1.6.2 Using BSD Functions: wait3() and wait4()

The BSD wait3() and wait4() system calls are useful if you’re interested in the resources used by a child process. They are nonstandard (meaning not part of POSIX) but widely available, including on GNU/Linux. The declarations are as follows:

#include <sys/types.h>                                     Common
#include <sys/time.h>                                      Not needed under GNU/Linux, but improves portability
#include <sys/resource.h>
#include <sys/wait.h>

pid_t wait3(int *status, int options, struct rusage *rusage);
pid_t wait4(pid_t pid, int *status, int options, struct rusage *rusage);
The key difference is the struct rusage pointer. If not NULL, the system fills it in with information about the process. This structure is described in POSIX and in the getrusage(2) manpage: struct rusage { struct timeval ru_utime; struct timeval ru_stime; long ru_maxrss; long ru_ixrss; long ru_idrss; long ru_isrss; long ru_minflt; long ru_majflt; long ru_nswap; long ru_inblock; long ru_oublock; long ru_msgsnd; long ru_msgrcv; long ru_nsignals; long ru_nvcsw; long ru_nivcsw; };
/* /* /* /* /* /* /* /* /* /* /* /* /* /* /* /*
user time used */ system time used */ maximum resident set size */ integral shared memory size */ integral unshared data size */ integral unshared stack size */ page reclaims */ page faults */ swaps */ block input operations */ block output operations */ messages sent */ messages received */ signals received */ voluntary context switches */ involuntary context switches */
Pure BSD systems (4.3 Reno and later) support all of the fields. Table 9.2 describes the availability of the various fields in the struct rusage for POSIX and Linux.

TABLE 9.2 Availability of struct rusage fields

Field        POSIX   Linux        Field        POSIX   Linux
ru_utime     ✓       ≥ 2.4        ru_nswap             ≥ 2.4
ru_stime     ✓       ≥ 2.4        ru_nvcsw             ≥ 2.6
ru_minflt            ≥ 2.4        ru_nivcsw            ≥ 2.6
ru_majflt            ≥ 2.4

Only the fields marked “POSIX” are defined by the standard. While Linux defines the full structure, the 2.4 kernel maintains only the user-time and system-time fields. The 2.6 kernel also maintains the fields related to context switching.3

The fields of most interest are ru_utime and ru_stime, the user and system CPU times, respectively. (User CPU time is time spent executing user-level code. System CPU time is time spent in the kernel on behalf of the process.) These two fields use a struct timeval, which maintains time values down to microsecond intervals. See Section 14.3.1, “Microsecond Times: gettimeofday(),” page 544, for more information on this structure.

In 4.2 and 4.3 BSD, the status argument to wait() and wait3() was a union wait. It fit into an int and provided access to the same information as the modern WIFEXITED() etc. macros do, but through the union’s members. Not all members were valid in all situations. The members and their uses are described in Table 9.3. POSIX doesn’t standardize the union wait, and 4.4 BSD doesn’t document it, instead using the POSIX macros. GLIBC jumps through several hoops to make old code using it continue to work. We describe it here primarily so that you’ll recognize it if you see it; new code should use the macros described in Section 9.1.6.1, “Using POSIX Functions: wait() and waitpid(),” page 306.
3 Double-check the getrusage(2) manpage if your kernel is newer, because this behavior may have changed.
TABLE 9.3 The 4.2 and 4.3 BSD union wait

POSIX macro     Union member   Usage                     Meaning
WIFEXITED()     w_termsig      w.w_termsig == 0          True if normal exit.
WEXITSTATUS()   w_retcode      code = w.w_retcode        Exit status if not by signal.
WIFSIGNALED()   w_termsig      w.w_termsig != 0          True if death by signal.
WTERMSIG()      w_termsig      sig = w.w_termsig         Signal that caused termination.
WIFSTOPPED()    w_stopval      w.w_stopval == WSTOPPED   True if stopped.
WSTOPSIG()      w_stopsig      sig = w.w_stopsig         Signal that caused stopping.
WCOREDUMP()     w_coredump     w.w_coredump != 0         True if child dumped core.
9.2 Process Groups

A process group is a group of related processes that should be treated together for job control purposes. Processes with the same process group ID are members of the process group, and the process whose PID is the same as the process group ID is the process group leader. New processes inherit the process group ID of their parent process.

We have already seen that waitpid() allows you to wait for any process in a given process group. In Section 10.6.7, “Sending Signals: kill() and killpg(),” page 376, we’ll also see that you can send a signal to all the processes in a particular process group as well. (Permission checking always applies; you can’t send a signal to a process you don’t own.)
9.2.1 Job Control Overview

Job control is an involved topic, one that we’ve chosen not to delve into for this volume. However, here’s a quick conceptual overview.

The terminal device (physical or otherwise) with a user working at it is called the controlling terminal. A session is a collection of process groups associated with the controlling terminal. There is only one session per terminal, with multiple process groups in the session. One
process is designated the session leader; this is normally a shell that can do job control, such as Bash, pdksh, zsh, or ksh93.4 We refer to such a shell as a job control shell.

Each job started by a job control shell, be it a single program or a pipeline, receives a separate process group identifier. That way, the shell can manipulate the job as a single entity, although it may have multiple processes.

The controlling terminal also has a process group identifier associated with it. When a user types a special character such as CTRL-C for “interrupt” or CTRL-Z for “stop,” the kernel sends the given signal to the processes in the terminal’s process group. The process group whose process group ID is the same as that of the controlling terminal is allowed to read from and write to the terminal. This is called the foreground process group. (It also receives the keyboard-generated signals.) Any other process groups in the session are background process groups and cannot read from or write to the terminal; they receive special signals that stop them if they try.

Jobs move in and out of the foreground, not by a change to an attribute of the job, but rather by a change to the controlling terminal’s process group. It is the job control shell that makes this change, and if the new process group was stopped, the shell continues it by sending a “continue” signal to all members of the process group.

In days of yore, users often used serial terminals connected to modems to dial in to centralized minicomputer Unix systems. When the user closed the connection (hung up the phone), the serial line detected the disconnection and the kernel sent a “hangup” signal to all processes connected to the terminal. This concept remains: If a hangup occurs (serial hardware does still exist and is still in use), the kernel sends the hangup signal to the foreground process group. If the session leader exits, the same thing happens.
An orphaned process group is one where, for every process in the group, that process’s parent is also in the group or the parent is in a different session. (This can happen if a job control shell exits with background jobs running.) Running processes in an orphaned process group are allowed to run to completion. If there are any already stopped processes in an orphaned process group when it becomes orphaned, the kernel sends those
4 Well, csh and tcsh can be included in this category too, but we prefer Bourne-style shells.
processes a hangup signal and then a continue signal. This causes them to wake up so that they can exit instead of remaining stopped forever.
9.2.2 Process Group Identification: getpgrp() and getpgid()

For compatibility with older systems, POSIX provides multiple ways to retrieve process group information:

#include <unistd.h>

pid_t getpgrp(void);                                       POSIX
pid_t getpgid(pid_t pid);                                  XSI
The getpgrp() function returns the current process’s process group ID. getpgid() is an XSI extension. It returns the process group ID of the given process pid. A pid of 0 means “the current process’s process group.” Thus ‘getpgid(0)’ is the same as ‘getpgrp()’. For general programming, getpgrp() should be used. 4.2 and 4.3 BSD also have a getpgrp() function, but it acts like the POSIX getpgid() function, requiring a pid argument. Since modern systems support POSIX, you should use the POSIX version in new code. (If you think this is confusing, you’re right. Multiple ways to do the same thing are a normal result of design-by-committee, since the committee feels that it must please everyone.)
9.2.3 Process Group Setting: setpgid() and setpgrp()

Two functions set the process group:

#include <unistd.h>

int setpgid(pid_t pid, pid_t pgid);                        POSIX
int setpgrp(void);                                         XSI
The setpgrp() function is simple: It sets the process group ID to be the same as the process ID. Doing so creates a new process group in the same session, and the calling process becomes the process group leader. The setpgid() function is intended for job control use. It allows one process to set the process group of another. A process may change only its own process group ID or the process group ID of a child process, and then only if that child process has not yet done an exec. Job control shells make this call after the fork, in both the parent and the child. For one of them the call succeeds, and the process group ID is changed. (Otherwise, there’s no way to guarantee the ordering, such that the parent could change the
child’s process group ID before the child execs. If the parent’s call succeeds first, it can move on to the next task, such as manipulating other jobs or the terminal.)

With setpgid(), pgid must be an existing process group that is part of the current session, effectively joining pid to that process group. Otherwise, pgid must be equal to pid, creating a new process group. There are some special case values for both pid and pgid:

pid = 0
    In this case, setpgid() changes the process group of the calling process to pgid. It’s equivalent to ‘setpgid(getpid(), pgid)’.

pgid = 0
    This sets the process group ID for the given process to be the same as its PID. Thus, ‘setpgid(pid, 0)’ is the same as ‘setpgid(pid, pid)’. This causes the process with PID pid to become a process group leader.
In all cases, session leaders are special; their PID, process group ID, and session ID values are all identical, and the process group ID of a session leader cannot be changed. (Session IDs are set with setsid() and retrieved with getsid(). These are specialized calls: see the setsid(2) and getsid(2) manpages.)
9.3 Basic Interprocess Communication: Pipes and FIFOs
Interprocess communication (IPC) is what it sounds like: a way for two separate processes to communicate. The oldest IPC mechanism on Unix systems is the pipe: a one-way communication channel. Data written into one end of the channel come out the other end.
9.3.1 Pipes Pipes manifest themselves as regular file descriptors. Without going to special lengths, you can’t tell if a file descriptor is a file or a pipe. This is a feature; programs that read standard input and write standard output don’t have to know or care that they may be communicating with another process. Should you need to know, the canonical way to check is to attempt ‘lseek(fd, 0L, SEEK_CUR)’ on the file descriptor; this call attempts to seek zero bytes from the current position, that is, a do-nothing operation.5 This operation fails for pipes and does no damage for other files.
5 Such an operation is often referred to as a no-op, short for “no operation.”
Chapter 9 • Process Management and Pipes
9.3.1.1 Creating Pipes
The pipe() system call creates a pipe:

    POSIX
    #include <unistd.h>

    int pipe(int filedes[2]);
The argument value is the address of a two-element integer array. pipe() returns 0 upon success and -1 if there was an error. If the call was successful, the process now has two additional open file descriptors. The value in filedes[0] is the read end of the pipe, and filedes[1] is the write end. (A handy mnemonic device is that the read end uses index 0, analogous to standard input being file descriptor 0, and the write end uses index 1, analogous to standard output being file descriptor 1.) As mentioned, data written into the write end are read from the read end. When you’re done with a pipe, you close both ends with a call to close(). The following simple program, ch09-pipedemo.c, demonstrates pipes by creating one, writing data to it, and then reading the data back from it:

     1  /* ch09-pipedemo.c --- demonstrate I/O with a pipe. */
     2
     3  #include <stdio.h>
     4  #include <errno.h>
     5  #include <unistd.h>
     6
     7  /* main --- create a pipe, write to it, and read from it. */
     8
     9  int main(int argc, char **argv)
    10  {
    11      static const char mesg[] = "Don't Panic!";    /* a famous message */
    12      char buf[BUFSIZ];
    13      ssize_t rcount, wcount;
    14      int pipefd[2];
    15      size_t l;
    16
    17      if (pipe(pipefd) < 0) {
    18          fprintf(stderr, "%s: pipe failed: %s\n", argv[0],
    19              strerror(errno));
    20          exit(1);
    21      }
    22
    23      printf("Read end = fd %d, write end = fd %d\n",
    24          pipefd[0], pipefd[1]);
    25
    26      l = strlen(mesg);
    27      if ((wcount = write(pipefd[1], mesg, l)) != l) {
    28          fprintf(stderr, "%s: write failed: %s\n", argv[0],
    29              strerror(errno));
    30          exit(1);
    31      }
    32
    33      if ((rcount = read(pipefd[0], buf, BUFSIZ)) != wcount) {
    34          fprintf(stderr, "%s: read failed: %s\n", argv[0],
    35              strerror(errno));
    36          exit(1);
    37      }
    38
    39      buf[rcount] = '\0';
    40
    41      printf("Read <%s> from pipe\n", buf);
    42      (void) close(pipefd[0]);
    43      (void) close(pipefd[1]);
    44
    45      return 0;
    46  }
Lines 11–15 declare local variables; of most interest is mesg, which is the text that will traverse the pipe. Lines 17–21 create the pipe, with error checking; lines 23–24 print the values of the new file descriptors (just to prove that they won’t be 0, 1, or 2). Line 26 gets the length of the message, to use with write(). Lines 27–31 write the message down the pipe, again with error checking. Lines 33–37 read the contents of the pipe, again with error checking. Line 39 supplies a terminating zero byte, so that the read data can be used as a regular string. Line 41 prints the data, and lines 42–43 close both ends of the pipe. Here’s what happens when the program runs:

$ ch09-pipedemo
Read end = fd 3, write end = fd 4
Read <Don't Panic!> from pipe

    42      while ((ret = wait(&status)) > 0) {     /* wait for children */
    43          if (ret == left_pid)
    44              printf("left child terminated, status: %x\n", status);
    45          else if (ret == right_pid)
    46              printf("right child terminated, status: %x\n", status);
    47          else
    48              printf("yow! unknown child %d terminated, status %x\n",
    49                  ret, status);
    50      }
    51
    52      return 0;
    53  }
Lines 22–25 create the pipe. This has to be done first. Lines 27–31 create the left-hand child, and lines 33–37 create the right-hand child. In both instances, the parent continues a linear execution path through main() while the child calls the appropriate function to manipulate file descriptors and do the exec. Lines 39–40 close the parent’s copy of the pipe. Lines 42–50 loop, reaping children, until wait() returns an error.
9.4 File Descriptor Management
    55  /* left_child --- do the work for the left child */
    56
    57  void left_child(void)
    58  {
    59      static char *left_argv[] = { "echo", "hi", "there", NULL };
    60
    61      close(pipefd[0]);
    62      close(1);
    63      dup(pipefd[1]);
    64      close(pipefd[1]);
    65
    66      execvp("echo", left_argv);
    67      _exit(errno == ENOENT ? 127 : 126);
    68  }
    69
    70  /* right_child --- do the work for the right child */
    71
    72  void right_child(void)
    73  {
    74      static char *right_argv[] = { "sed", "s/hi/hello/g", NULL };
    75
    76      close(pipefd[1]);
    77      close(0);
    78      dup(pipefd[0]);
    79      close(pipefd[0]);
    80
    81      execvp("sed", right_argv);
    82      _exit(errno == ENOENT ? 127 : 126);
    83  }
Lines 57–68 are the code for the left-hand child. The procedure follows the steps given above: close the unneeded end of the pipe, close the original standard output, dup() the pipe’s write end to 1, and then close the original write end. At that point, line 66 calls execvp(), and if it fails, line 67 calls _exit(). (Remember that line 67 is never executed if execvp() succeeds.) Lines 72–83 do the same steps for the right-hand child. Here’s what happens when it runs:

$ ch09-pipeline                             Run the program
left child terminated, status: 0            Left child finishes before output (!)
hello there                                 Output from right child
right child terminated, status: 0
$ ch09-pipeline                             Run the program again
hello there                                 Output from right child and …
right child terminated, status: 0           Right child finishes before left one
left child terminated, status: 0
Note that the order in which the children finish isn’t deterministic. It depends on the system load and many other factors that can influence process scheduling. You should be careful to avoid making ordering assumptions when you write code that creates multiple processes, particularly code that calls one of the wait() family of functions.

The whole process is illustrated in Figure 9.5. Figure 9.5 (a) depicts the situation after the parent has created the pipe (lines 22–25) and the two children (lines 27–37). Figure 9.5 (b) shows the situation after the parent has closed the pipe (lines 39–40) and started to wait for the children (lines 42–50). Each child has moved the pipe into place as standard output (left child, lines 61–63) and standard input (right child, lines 76–78). Finally, Figure 9.5 (c) depicts the situation after the children have closed off the original pipe (lines 64 and 79) and called execvp() (lines 66 and 81).
9.4.2 Creating Nonlinear Pipelines: /dev/fd/XX
Many modern Unix systems, including GNU/Linux, support special files in the /dev/fd directory.9 These files represent open file descriptors, with names such as /dev/fd/0, /dev/fd/1, and so on. Passing such a name to open() returns a new file descriptor that is effectively the same as calling dup() on the given file descriptor number.

These special files find their use at the shell level: the Bash, ksh88 (some versions), and ksh93 shells supply a feature called process substitution that makes it possible to create nonlinear pipelines. The notation at the shell level is ‘<(...)’ for input pipelines and ‘>(...)’ for output pipelines. For example, suppose you wish to apply the diff command to the output of two commands. You would normally have to use temporary files:

command1 > /tmp/out.$$.1
command2 > /tmp/out.$$.2
diff /tmp/out.$$.1 /tmp/out.$$.2
rm /tmp/out.$$.1 /tmp/out.$$.2
With process substitution, it looks like this:

9 On GNU/Linux systems, /dev/fd is a symbolic link to /proc/self/fd, but since /dev/fd is the common place, that’s what you should use in your code.
FIGURE 9.5 Parent creating a pipeline
(a) Pipe is shared among parent and both children after pipe() and fork()
(b) Parent closes pipe and waits for children; children close()/dup() the pipe onto stdout (left-hand child) and stdin (right-hand child), then close the original pipe fds
(c) Children call execvp(), programs run: echo hi there feeds sed s/hi/hello/g, producing "hello there"
diff <(command1) <(command2)

$ ls -l f1                                  Check permissions
-rw-r--r--    1 arnold   arnold   36 Oct 16 17:41 f1    Inherited from parent
Chapter 11 • Permissions and User and Group ID Numbers
$ mkdir d2                                  Make a directory
$ ls -ld d2                                 Check permissions
drwxr-sr-x    2 arnold   arnold   4096 Oct 16 17:51 d2    Group and setgid inherited
The ext2 and ext3 filesystems for GNU/Linux work as just shown. In addition, they support special mount options, grpid and bsdgroups, which make the “use parent directory group” semantics the default. (The two names mean the same thing.) In other words, when these mount options are used, parent directories do not have to have their setgid bits set. The opposite mount options are nogrpid and sysvgroups. This is the default behavior; however, the setgid bit is still honored if it’s present. (Here, too, the two names mean the same thing.)

POSIX specifies that new files and directories inherit either the effective GID of the creating process or the group of the parent directory. However, implementations have to provide a way to make new directories inherit the group of the parent directory. Furthermore, the standard recommends that applications not rely on one behavior or the other; in cases where it matters, applications should use chown() to force the new file or directory’s group to the desired GID.
11.5.2 Directories and the Sticky Bit “Sherman, set the wayback machine for 1976.” —Mr. Peabody—
The sticky bit originated in the PDP-11 versions of Unix and was applied to regular executable files.6 This bit was applied to programs that were expected to be heavily used, such as the shell and the editor. When a program had this bit set, the kernel would keep a copy of the program’s executable code on the swap device, from which it could be quickly loaded into memory for reuse. (Loading from the filesystem took longer: The image on the swap device was stored in contiguous disk blocks, whereas the image in the filesystem might be spread all over the disk.) The executable images “stuck” to the swap device, hence the name. Thus, even if the program was not currently in use, it was expected that it would be in use again shortly when another user went to run it, so it would be loaded quickly.
6 Images come to mind of happy youthful programs, their faces and hands covered in chocolate.
Modern systems have considerably faster disk and memory hardware than the PDP-11s of yore. They also use a technique called demand paging to load into memory only those parts of an executable program that are being executed. Thus, today, the sticky bit on a regular executable file serves no purpose, and indeed it has no effect.

However, in Section 1.1.2, “Directories and Filenames,” page 6, we mentioned that the sticky bit on an otherwise writable directory prevents file removal from that directory, or file renaming within it, by anyone except the file’s owner, or root. Here is an example:

$ ls -ld /tmp                               Show /tmp’s permissions
drwxrwxrwt   19 root   root   4096 Oct 20 14:04 /tmp
$ cd /tmp                                   Change there
$ echo this is my file > arnolds-file       Create a file
$ ls -l arnolds-file                        Show its permissions
-rw-r--r--    1 arnold   devel   16 Oct 20 14:14 arnolds-file
$ su - miriam                               Change to another user
Password:
$ cd /tmp                                   Change to /tmp
$ rm arnolds-file                           Attempt to remove file
rm: remove write-protected regular file `arnolds-file'? y       rm is cautious
rm: cannot remove `arnolds-file': Operation not permitted       Kernel disallows removal
The primary purpose of this feature is exactly for directories such as /tmp, where multiple users wish to place their files. On the one hand, the directory needs to be world-writable so that anyone can create files in it. On the other hand, once it’s world-writable, any user can remove any other user’s files! The directory sticky bit solves this problem nicely. Use ‘chmod +t’ to add the sticky bit to a file or directory:

$ mkdir mytmp                               Create directory
$ chmod a+wxt mytmp                         Add all-write, sticky bits
$ ls -ld mytmp                              Verify result
drwxrwxrwt    2 arnold   devel   4096 Oct 20 14:23 mytmp
Finally, note that the directory’s owner can also remove files, even if they don’t belong to him.
11.6 Setting Real and Effective IDs Things get interesting once a process has to change its UID and GID values. Setting the group set is straightforward. Changing real and effective UID and GID values around is more involved.
11.6.1 Changing the Group Set
The setgroups() function installs a new group set:

    Common
    #include <sys/types.h>
    #include <unistd.h>
    #include <grp.h>

    int setgroups(size_t size, const gid_t *list);
The size parameter indicates how many items are in the list array. The return value is 0 if all went well, -1 with errno set otherwise. Unlike the functions for manipulating the real and effective UID and GID values, this function may be called only by a process running as root. This is one example of what POSIX terms a privileged operation; as such, it’s not formally standardized by POSIX. setgroups() is used by any program that does a login to a system, such as /bin/login for console logins or /bin/sshd for remote logins with ssh.
11.6.2 Changing the Real and Effective IDs
Running with two different user IDs presents a challenge to the application programmer. There are things that a program may need to do when working with the effective UID, and other things that it may need to do when working with the real UID.

For example, before Unix systems had job control, many programs provided shell escapes, that is, a way to run a command or interactive shell from within the current program. The ed editor is a good example of this: Typing a command line beginning with ! ran the rest of the line as a shell command. Typing ‘!sh’ gave you an interactive shell. (This still works—try it!) Suppose the hypothetical game program described earlier also provides a shell escape: The shell should be run as the real user, not the effective one. Otherwise, it again becomes trivial for the game player to edit the score file directly, or do worse!

Thus, there is a clear need to be able to change the effective UID to be the real UID. Furthermore, it’s helpful to be able to switch the effective UID back to what it was originally. (This is the reason for having a saved set-user ID in the first place; it becomes possible to regain the original privileges that the process had when it started out.)

As with many Unix APIs, different systems solved the problem in different ways, sometimes by using the same API but with different semantics and sometimes by
introducing different APIs. Delving into the historic details is only good for producing headaches, so we don’t bother. Instead, we look at what POSIX provides and how each API works. Furthermore, our discussion focuses on the real and effective UID values; the GID values work analogously, so we don’t bother to repeat the details for those system calls. The functions are as follows:

    POSIX
    #include <sys/types.h>
    #include <unistd.h>

    int seteuid(uid_t euid);                   Set effective ID
    int setegid(gid_t egid);

    int setuid(uid_t uid);                     Set effective ID; if root, set all
    int setgid(gid_t gid);

    int setreuid(uid_t ruid, uid_t euid);      BSD compatibility, set both
    int setregid(gid_t rgid, gid_t egid);
There are three sets of functions. The first two were created by POSIX:

int seteuid(uid_t euid)
    This function sets only the effective UID. A regular (non-root) user can set this ID only to one of the real, effective, or saved set-user ID values. Applications that will switch the effective UID around should use this function exclusively. A process with an effective UID of zero can set the effective UID to any value. Since it is also possible to set the effective UID to the saved set-user ID, the process can regain its root privileges with another call to seteuid().

int setegid(gid_t egid)
    This function does for the effective group ID what seteuid() does for the effective user ID.

The next set of functions offers the original Unix API for changing the real and effective UID and GID. Under the POSIX model, these functions are what a setuid-root program should use to make a permanent change of real and effective UID:

int setuid(uid_t uid)
    For a regular user, this function also sets only the effective UID. As with seteuid(), the effective UID may be set to any of the current real, effective, or saved set-user ID values. The change is not permanent; the effective UID can be changed to another value (from the same source set) with a subsequent call.
    However, for root, this function sets all three of the real, effective, and saved set-user IDs to the given value. Furthermore, the change is permanent; the IDs cannot be changed back. (This makes sense: Once the saved set-user ID is changed, there isn’t a different ID to change back to.)

int setgid(gid_t gid)
    This function does for the effective group ID what setuid() does for the effective user ID. The same distinction between regular users and root applies.

NOTE
The ability to change the group ID hinges on the effective user ID. An effective GID of 0 has no special privileges.
Finally, POSIX provides two functions from 4.2 BSD for historical compatibility. It is best not to use these in new code. However, since you are likely to see older code that does use these functions, we describe them here.

int setreuid(uid_t ruid, uid_t euid)
    Sets the real and effective UIDs to the given values. A value of -1 for ruid or euid leaves the respective ID unchanged. (This is similar to chown(); see Section 5.5.1, “Changing File Ownership: chown(), fchown(), and lchown(),” page 155.) root is allowed to set both the real and the effective IDs to any value. According to POSIX, non-root users may change only the effective ID; it is “unspecified” what happens if a regular user attempts to change the real UID. However, the GNU/Linux setreuid(2) manpage spells out the Linux behavior: The real UID may be set to either the real or effective UID, and the effective UID may be set to any of the real, effective, or saved set-user IDs. (For other systems, see the setreuid(2) manpage.)

int setregid(gid_t rgid, gid_t egid)
    Does for the real and effective group IDs what setreuid() does for the real and effective user IDs. The same distinction between regular users and root applies.

The saved set-user ID didn’t exist in the BSD model, so the idea behind setreuid() and setregid() was to make it simple to swap the real and effective IDs:

    setreuid(geteuid(), getuid());    /* swap real and effective */
However, given POSIX’s adoption of the saved set-user ID model and the seteuid() and setegid() functions, the BSD functions should not be used in new code. Even the 4.4 BSD documentation marks these functions as obsolete, recommending seteuid()/setuid() and setegid()/setgid() instead.
11.6.3 Using the Setuid and Setgid Bits
There are important cases in which a program running as root must irrevocably change all three of the real, effective, and saved set-user IDs to those of a regular user. The most obvious is the login program, which you use every time you log in to a GNU/Linux or Unix system (either directly or remotely). There is a hierarchy of programs, as outlined in Figure 11.1.

FIGURE 11.1 From init to getty to login to shell
(init, PID 1, forks and execs getty, which runs with ruid 0 and euid 0; getty execs login in the same process; login does open()/dup() and setgroups()/setgid()/setuid(), then execs the shell, which runs with the user’s IDs, e.g. ruid 42 and euid 42)
The code for login is too complicated to be shown here, since it deals with a number of tasks that aren’t relevant to the current discussion. But we can outline the steps that happen at login time, as follows:
1. init is the primordial process. It has PID 1. All other processes are descended from it. The kernel handcrafts process 1 at boot time and runs init in it. It runs with both the real and effective UID set to zero, that is, as root.

2. init reads /etc/inittab, which, among other things, tells init on which hardware devices it should start a getty process. For each such device (such as the console, serial terminals, or virtual consoles on a GNU/Linux system), init forks a new process. This new process then uses exec() to run getty (“get tty,” that is, a terminal). On many GNU/Linux systems, this command is named mingetty. The program opens the device, resets its state, and prints the ‘login:’ prompt.

3. Upon reading a login name, getty execs login. The login program looks up the username in the password file, prompts for a password, and verifies the password. If they match, the login process continues.

4. login changes to the user’s home directory, sets up the initial environment, and then sets up the initial set of open files. It closes all file descriptors, opens the terminal, and uses dup() to copy the terminal’s file descriptor to 0, 1, and 2. This is where the already-opened standard input, output, and error file descriptors come from.

5. login then uses setgroups() to set the supplemental group set, setgid() to set the real, effective, and saved set-group IDs to those of the user, and finally setuid() to set all three of the real, effective, and saved set-user IDs to those of the logging-in user. Note that the call to setuid() must come last so that the other two calls succeed.

6. Finally, login execs the user’s login shell. Bourne-style shells then read /etc/profile and $HOME/.profile, if those files exist. Finally, the shell prints a prompt.

Note how one process changes its nature from system process to user process. Each child of init starts out as a copy of init. By using exec(), the same process does different jobs.
By calling setuid() to change from root to a regular user, the process finally goes directly to work for the user. When you exit the shell (by CTRL-D or exit), the process simply dies. init then restarts the cycle, spawning a fresh getty, which prints a fresh ‘login:’ prompt.
NOTE Open files remain open and usable, even after a process has changed any or all of its UIDs or GIDs. Thus, setuid programs should open any necessary files up front, change their IDs to those of the real user, and continue with the rest of their job without any extra privilege.
Table 11.1 summarizes the six standard functions for manipulating UID and GID values.

TABLE 11.1 API summary for setting real and effective IDs

Function      Sets            Permanent    Regular user    Root
seteuid()     E               No           From R, E, S    Any value
setegid()     E               No           From R, E, S    Any value
setuid()      Root: R, E, S   Root: yes    From R, E       Any value
              Other: E        Other: no
setgid()      Root: R, E, S   Root: yes    From R, E       Any value
              Other: E        Other: no
setreuid()    E, may set R    No           From R, E       Any value
setregid()    E, may set R    No           From R, E       Any value
11.7 Working with All Three IDs: getresuid() and setresuid() (Linux)
Linux provides additional system calls by which you can work directly with the real, effective, and saved user and group IDs:

    Linux
    #include <sys/types.h>
    #include <unistd.h>

    int getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
    int getresgid(gid_t *rgid, gid_t *egid, gid_t *sgid);

    int setresuid(uid_t ruid, uid_t euid, uid_t suid);
    int setresgid(gid_t rgid, gid_t egid, gid_t sgid);

The functions are as follows:
int getresuid(uid_t *ruid, uid_t *euid, uid_t *suid)
    Retrieves the real, effective, and saved set-user ID values. The return value is 0 on success or -1 on error, with errno indicating the problem.

int getresgid(gid_t *rgid, gid_t *egid, gid_t *sgid)
    Retrieves the real, effective, and saved set-group ID values. The return value is 0 on success or -1 on error, with errno indicating the problem.

int setresuid(uid_t ruid, uid_t euid, uid_t suid)
    Sets the real, effective, and saved set-user ID values, respectively. When a parameter value is -1, the corresponding UID is left unchanged. When the process is running as root, the parameters can be any arbitrary values. (However, using a nonzero value for euid causes a permanent, irrevocable loss of root privilege.) Otherwise, the parameters must be one of the current real, effective, or saved set-user ID values.

int setresgid(gid_t rgid, gid_t egid, gid_t sgid)
    Sets the real, effective, and saved set-group ID values, respectively. When a parameter value is -1, the corresponding GID is left unchanged. This function is analogous to setresuid().

The setresuid() and setresgid() functions are particularly valuable because the semantics are clearly defined. A programmer knows exactly what the effect of the call will be. Furthermore, the calls are “all or nothing” operations: They either succeed completely, making the desired change, or fail completely, leaving the current situation as it was. This improves reliability since, again, it’s possible to be sure of exactly what happened.
11.8 Crossing a Security Minefield: Setuid root Real minefields are difficult, but not impossible, to cross. However, it’s not something to attempt lightly, without training or experience. So, too, writing programs that run setuid to root is a difficult task. There are many, many issues to be aware of, and almost anything can have unexpected security consequences. Such an endeavor should be undertaken carefully.
In particular, it pays to read up on Linux/Unix security issues and to invest time in learning how to write setuid-root programs. If you dive straight into such a challenge having read this book and nothing else, rest assured that your system will be broken into, easily and immediately. It’s unlikely that either you or your customers will be happy. Here are a few guiding principles:

• Do as little as possible as root. Use your super powers sparingly, only where they’re absolutely needed.
• Design your program properly. Compartmentalize your program so that all of the root operations can be done up front, with the rest of the program running as a regular user.
• When changing or dropping privileges, use setresuid() if you have it. Otherwise use setreuid(), since those two functions have the cleanest semantics. Use setuid() only when you want the change to be permanent.
• Change from root to regular user in the proper order: set the group set and GID values first, and then the UID values.
• Be especially careful with fork() and exec(); the real and effective UIDs are not changed across them unless you explicitly change them.
• Consider using setgid permissions and a special group for your application. If that will work, it’ll save you much headache.
• Consider throwing out the inherited environment. If you must keep some environment variables around, keep as few as possible. Be sure to provide reasonable values for the PATH and IFS environment variables.
• Avoid execlp() and execvp(), which depend upon the value of the PATH environment variable (although this is less problematic if you’ve reset PATH yourself).

These are just a few of the many tactics for traversing a danger zone notable for pitfalls, booby-traps, and landmines. See the next section for pointers to other sources of information.
11.9 Suggested Reading Unix (and thus GNU/Linux) security is a topic that requires knowledge and experience to handle properly. It has gotten only harder in the Internet Age, not easier.
1. Practical UNIX & Internet Security, 3rd edition, by Simson Garfinkel, Gene Spafford, and Alan Schwartz. O’Reilly & Associates, Sebastopol, CA, USA, 2003. ISBN: 0-596-00323-4.
   This is the standard book on Unix security.

2. Building Secure Software: How to Avoid Security Problems the Right Way, by John Viega and Gary McGraw. Addison-Wesley, Reading, Massachusetts, USA, 2001. ISBN: 0-201-72152-X.
   This is a good book on writing secure software, and it includes how to deal with setuid issues. It assumes you are familiar with the basic Linux/Unix APIs; by the time you finish reading our book, you should be ready to read it.

3. “Setuid Demystified,” by Hao Chen, David Wagner, and Drew Dean. Proceedings of the 11th USENIX Security Symposium, August 5–9, 2002. http://www.cs.berkeley.edu/~daw/papers/setuid-usenix02.pdf.
   Garfinkel, Spafford, and Schwartz recommend reading this paper “before you even think about writing code that tries to save and restore privileges.” We most heartily agree with them.
11.10 Summary
• The use of user and group ID values (UIDs and GIDs) to identify files and processes is what makes Linux and Unix into multiuser systems. Processes carry both real and effective UID and GID values, as well as a supplemental group set. It is generally the effective UID that determines how one process might affect another, and the effective UID, GID, and group set that are checked against a file’s permissions. Users with an effective UID of zero, known as root or the superuser, are allowed to do what they like; the system doesn’t apply permission checks to such a user.
• The saved set-user ID and saved set-group ID concepts came from System V and have been adopted by POSIX with full support in GNU/Linux. Having these separate ID values makes it possible to easily and correctly swap real and effective UIDs (and GIDs) as necessary.
• Setuid and setgid programs create processes in which the effective and real IDs differ. The programs are marked as such with additional bits in the file permissions. The setuid and setgid bits must be added to a file after it is created. • getuid() and geteuid() retrieve the real and effective UID values, respectively, and getgid() and getegid() retrieve the real and effective GID values, respectively. getgroups() retrieves the supplemental group set and in a POSIX environment, can query the system as to how many members the group set contains. • The access() function does file permission checking as the real user, making it possible for setuid programs to check the real user’s permissions. Note that, often, examining the information as retrieved by stat() may not provide the full picture, given that the file may reside on a nonnative or network filesystem. • The GLIBC euidaccess() function is similar to access() but does the checking on the base of the effective UID and GID values. • The setgid and sticky bits, when applied to directories, introduce extra semantics. When a directory has its setgid bit on, new files in that directory inherit the directory’s group. New directories do also, and they automatically inherit the setting of the setgid bit. Without the setgid bit, new files and directories receive the effective GID of the creating process. The sticky bit on otherwise writable directories restricts file removal to the file’s owner, the directory’s owner, and to root. • The group set is changed with setgroups(). This function isn’t standardized by POSIX, but it exists on all modern Unix systems. Only root may use it. • Changing UIDs and GIDs is considerably involved. The semantics of various system calls have changed over the years. New applications that will change only their effective UID/GID should use seteuid() and setegid(). Non-root applications can also set their effective IDs with setuid() and setgid(). 
The setreuid() and setregid() calls from BSD were intended for swapping the UID and GID values; their use in new programs is discouraged.

• Applications running as root can permanently change the real, effective, and saved ID values with setuid() and setgid(). One example of this is login, which has to change from a system program running as root to a nonprivileged login shell running as a regular user.
• The Linux setresuid() and setresgid() functions should be used when they're available, since they provide the cleanest and most reliable behavior.

• Writing setuid-root applications is not a task for a novice. If you need to do such a thing, read up on security issues first; the sources cited previously are excellent.
Exercises

1. Write a simple version of the id command. Its action is to print the user and group IDs, with the group names, to standard output. When the effective and real IDs are different, both are printed. For example:

   $ id
   uid=2076(arnold) gid=42(devel) groups=19(floppy),42(devel),2076(arnold)
   Its usage is:

   id [ user ]
   id -G [ -nr ] [ user ]
   id -g [ -nr ] [ user ]
   id -u [ -nr ] [ user ]
   With user, that user's information is displayed; otherwise, id prints the invoking user's information. The options are as follows:

   -G   Print all the group values as numeric values only, no names.
   -n   Print the name only, no numeric values. Applies to user and group values.
   -g   Print just the effective GID.
   -u   Print just the effective UID.
2. Write a simple program, named sume, that is setuid to yourself. It should prompt for a password (see getpass(3)), which, for the purposes of this exercise, can be hardwired into the program's source code. If the person running the program correctly enters the password, sume should exec a shell. Get another user to help you test it.

3. How do you feel about making sume available to your friends? To your fellow students or coworkers? To every user on your system?
12 General Library Interfaces — Part 2
In this chapter

12.1  Assertion Statements: assert()              page 428
12.2  Low-Level Memory: The memXXX() Functions    page 432
12.3  Temporary Files                             page 436
12.4  Committing Suicide: abort()                 page 445
12.5  Nonlocal Gotos                              page 446
12.6  Pseudorandom Numbers                        page 454
12.7  Metacharacter Expansions                    page 461
12.8  Regular Expressions                         page 471
12.9  Suggested Reading                           page 480
12.10 Summary                                     page 481
      Exercises                                   page 482
Chapter 6, "General Library Interfaces — Part 1," page 165, presented the first set of general-purpose library APIs. In a sense, those APIs support working with the fundamental objects that Linux and Unix systems manage: the time of day, users and groups for files, and sorting and searching.

This chapter is more eclectic; the APIs covered here are not particularly related to each other. However, all are useful for day-to-day Linux/Unix programming. Our presentation moves from simpler, more general APIs to more complicated and more specialized ones.
12.1 Assertion Statements: assert()

An assertion is a statement you make about the state of your program at certain points in time during its execution. The use of assertions for programming was originally developed by C.A.R. Hoare.1 The general idea is part of "program verification": as you design and develop a program, you can show that it's correct by making carefully reasoned statements about the effects of your program's code. Often, such statements are made about invariants: facts about the program's state that are supposed to remain true throughout the execution of a chunk of code.

Assertions are particularly useful for describing two kinds of invariants, preconditions and postconditions: conditions that must hold true before and after, respectively, the execution of a code segment. A simple example of preconditions and postconditions is linear search:

/* lsearch --- return index in array of value, or -1 if not found */

int lsearch(int *array, size_t size, int value)
{
    size_t i;

    /* precondition: array != NULL */
    /* precondition: size > 0 */
    for (i = 0; i < size; i++)
        if (array[i] == value)
            return i;

    /* postcondition: i == size */
    return -1;
}
1 In his 1981 ACM Turing Award lecture, however, Dr. Hoare states that Alan Turing himself promoted this idea.
This example states the conditions using comments. But wouldn't it be better to be able to test the conditions by using code? This is the job of the assert() macro:

#include <assert.h>                                       ISO C

void assert(scalar expression);
When the scalar expression is false, the assert() macro prints a diagnostic message and exits the program (with the abort() function; see Section 12.4, "Committing Suicide: abort()," page 445). ch12-assert.c provides the lsearch() function again, this time with assertions and a main() function:

 1  /* ch12-assert.c --- demonstrate assertions */
 2
 3  #include <stdio.h>
 4  #include <assert.h>
 5
 6  /* lsearch --- return index in array of value, or -1 if not found */
 7
 8  int lsearch(int *array, size_t size, int value)
 9  {
10      size_t i;
11
12      assert(array != NULL);
13      assert(size > 0);
14      for (i = 0; i < size; i++)
15          if (array[i] == value)
16              return i;
17
18      assert(i == size);
19
20      return -1;
21  }
22
23  /* main --- test out assertions */
24
25  int main(void)
26  {
27  #define NELEMS 4
28      static int array[NELEMS] = { 1, 17, 42, 91 };
29      int index;
30
31      index = lsearch(array, NELEMS, 21);
32      assert(index == -1);
33
34      index = lsearch(array, NELEMS, 17);
35      assert(index == 1);
36
37      index = lsearch(NULL, NELEMS, 10);    /* won't return */
38
39      printf("index = %d\n", index);
40
41      return 0;
42  }
When compiled and run, the assertion on line 12 "fires":

$ ch12-assert                                             Run the program
ch12-assert: ch12-assert.c:12: lsearch: Assertion `array != ((void *)0)' failed.
Aborted (core dumped)
The message from assert() varies from system to system. For GLIBC on GNU/Linux, the message includes the program name, the source code filename and line number, the function name, and then the text of the failed assertion. (In this case, the symbolic constant NULL shows up as its macro expansion, '((void *)0)'.) The 'Aborted (core dumped)' message means that ch12-assert created a core file; that is, a snapshot of the process's address space right before it died.2 This file can be used later, with a debugger; see Section 15.3, "GDB Basics," page 570. Core file creation is a purposeful side effect of assert(); the assumption is that something went drastically wrong, and you'll want to examine the process with a debugger to determine what.

You can disable assertions by compiling your program with the command-line option '-DNDEBUG'. When this macro is defined before <assert.h> is included, the assert() macro expands into code that does nothing. For example:

$ gcc -DNDEBUG=1 ch12-assert.c -o ch12-assert             Compile with -DNDEBUG
$ ch12-assert                                             Run it
Segmentation fault (core dumped)                          What happened?

Here, we got a real core dump! We know that assertions were disabled; there's no "failed assertion" message. So what happened? Consider line 15 of lsearch(), when called from line 37 of main(). In this case, the array variable is NULL. Accessing memory through a NULL pointer is an error. (Technically, the various standards leave as "undefined" what happens when you dereference a NULL pointer. Most modern systems do what GNU/Linux does; they kill the process by sending it a SIGSEGV signal; this in turn produces a core dump. This process is described in Chapter 10, "Signals," page 347.)

2 As mentioned in Section 10.2, "Signal Actions," page 348, some GNU/Linux distributions disable creation of core files. To reenable them, put the line 'ulimit -S -c unlimited' into your ~/.profile file.
This case raises an important point about assertions. Frequently, programmers mistakenly use assertions instead of runtime error checking. In our case, the test for 'array != NULL' should be a runtime check:

if (array == NULL)
    return -1;
The test for 'size > 0' (line 13) is less problematic; if size is 0 or less than 0, the loop never executes and lsearch() (correctly) returns -1. (In truth, this assertion isn't needed, because the code correctly handles the case in which 'size <= 0'.)

12.2 Low-Level Memory: The memXXX() Functions

As an example of memchr() in action, here is a fragment from the GNU version of wc, which counts lines and bytes:

259          /* Use a separate loop when counting only lines or
260             lines and bytes -- but not chars or words.  */
261          while ((bytes_read = safe_read (fd, buf, BUFFER_SIZE)) > 0)
262            {
263              register char *p = buf;
264
265              if (bytes_read == SAFE_READ_ERROR)
266                {
267                  error (0, errno, "%s", file);
268                  exit_status = 1;
269                  break;
270                }
271
272              while ((p = memchr (p, '\n', (buf + bytes_read) - p)))
273                {
274                  ++p;
275                  ++lines;
276                }
277              bytes += bytes_read;
278            }
The outer loop (lines 261–278) reads data blocks from the input file. The inner loop (lines 272–276) uses memchr() to find and count newline characters. The complicated expression ‘(buf + bytes_read) - p’ resolves to the number of bytes left between the current value of p and the end of the buffer. The comment on lines 259–260 needs some explanation. Briefly, modern systems can use characters that occupy more than one byte in memory and on disk. (This is discussed in a little more detail in Section 13.4, “Can You Spell That for Me, Please?”, page 521.) Thus, wc has to use different code if it’s distinguishing characters from bytes: This code deals with the counting-bytes case.
12.3 Temporary Files

A temporary file is exactly what it sounds like: a file that, while a program runs, holds data that isn't needed once the program exits. An excellent example is the sort program. sort reads standard input if no files are named on the command line or if you use '-' as the filename. Yet sort has to read all of its input data before it can output the sorted results. (Think about this a bit and you'll see that it's true.) While standard input is being read, the data must be stored somewhere until sort can sort it; this is the perfect use for a temporary file. sort also uses temporary files for storing intermediate sorted results.
Amazingly, there are five different functions for creating temporary files. Three of them work by creating strings representing (supposedly) unique filenames. As we’ll see, these should generally be avoided. The other two work by creating and opening the temporary file; these functions are preferred.
12.3.1 Generating Temporary Filenames (Bad)

There are three functions whose purpose is to create the name of a unique, nonexistent file. Once you have such a filename, you can use it to create a temporary file. Since the name is unique, you're "guaranteed" exclusive use of the file. Here are the function declarations:

#include <stdio.h>

char *tmpnam(char *s);                                    ISO C
char *tempnam(const char *dir, const char *pfx);          XSI
char *mktemp(char *template);                             ISO C
The functions all provide different variations of the same theme: They fill in or create a buffer with the path of a unique temporary filename. The file is unique in that the created name doesn't exist as of the time the functions create the name and return it. The functions work as follows:

char *tmpnam(char *s)
    Generates a unique filename. If s is not NULL, it should be at least L_tmpnam bytes in size, and the unique name is copied into it. If s is NULL, the name is generated in an internal static buffer that can be overwritten on subsequent calls. The directory prefix of the path will be P_tmpdir. Both P_tmpdir and L_tmpnam are defined in <stdio.h>.

char *tempnam(const char *dir, const char *pfx)
    Like tmpnam(), but lets you specify the directory prefix. If dir is NULL, P_tmpdir is used. The pfx argument, if not NULL, specifies up to five characters to use as the leading characters of the filename. tempnam() allocates storage for the filenames it generates. The returned pointer can be used later with free() (and should be, if you wish to avoid memory leaks).

char *mktemp(char *template)
    Generates unique filenames based on a template. The last six characters of template must be 'XXXXXX'; these characters are replaced with a unique suffix.
NOTE
The template argument to mktemp() is overwritten in place. Thus, it should not be a string constant. Many pre-Standard C compilers put string constants into the data segment, along with regular global variables. Although defined as constants in the source code, they were writable; thus, code like the following was not uncommon:

/* Old-style code: don't do this. */
char *tfile = mktemp("/tmp/myprogXXXXXX");

… use tfile …

On modern systems, such code will likely fail; string constants nowadays find themselves in read-only segments of memory.
Using these functions is quite straightforward. The file ch12-mktemp.c demonstrates mktemp(); changes to use the other functions are not difficult:

 1  /* ch12-mktemp.c --- demonstrate naive use of mktemp().
 2     Error checking omitted for brevity */
 3
 4  #include <stdio.h>
 5  #include <fcntl.h>      /* for open flags */
 6  #include <limits.h>     /* for PATH_MAX */
 7
 8  int main(void)
 9  {
10      static char template[] = "/tmp/myfileXXXXXX";
11      char fname[PATH_MAX];
12      static char mesg[] =
13          "Here's lookin' at you, kid!\n";     /* beats "hello, world" */
14      int fd;
15
16      strcpy(fname, template);
17
18      mktemp(fname);
19      /* RACE CONDITION WINDOW OPENS */
20
21      printf("Filename is %s\n", fname);
22
23      /* RACE CONDITION WINDOW LASTS TO HERE */
24
25      fd = open(fname, O_CREAT|O_RDWR|O_TRUNC, 0600);
26      write(fd, mesg, strlen(mesg));
27      close(fd);
28
29      /* unlink(fname); */
30
31      return 0;
32  }
The template variable (line 10) defines the filename template; the 'XXXXXX' will be replaced with a unique value. Line 16 copies the template into fname, which isn't const: It can be modified. Line 18 calls mktemp() to generate the filename, and line 21 prints it so we can see what it is. (We explain the comments on lines 19 and 23 shortly.) Line 25 opens the file, creating it if necessary. Line 26 writes the message in mesg, and line 27 closes the file. In a program in which the file should be removed when we're done with it, line 29 would not be commented out. (Sometimes, a temporary file should not be unlinked; for example, if the file will be renamed once it's completely written.) We've commented it out so that we can run this program and look at the file afterwards. Here's what happens when the program runs:

$ ch12-mktemp                             Run the program
Filename is /tmp/myfileQES4WA             Filename printed
$ cat /tmp/myfileQES4WA                   Contents are what we expect
Here's lookin' at you, kid!
$ ls -l /tmp/myfileQES4WA                 So are owner and permissions
-rw-------    1 arnold   devel        28 Sep 18 09:27 /tmp/myfileQES4WA
$ rm /tmp/myfileQES4WA                    Remove it
$ ch12-mktemp                             Is same filename reused?
Filename is /tmp/myfileic7xCy             No. That's good
$ cat /tmp/myfileic7xCy                   Check contents again
Here's lookin' at you, kid!
$ ls -l /tmp/myfileic7xCy                 Check owner and permissions again
-rw-------    1 arnold   devel        28 Sep 18 09:28 /tmp/myfileic7xCy
Everything seems to be working fine. mktemp() gives back a unique name, ch12-mktemp creates the file with the right permissions, and the contents are what’s expected. So what’s the problem with all of these functions? Historically, mktemp() used a simple, predictable algorithm to generate the replacement characters for the ‘XXXXXX’ in the template. Furthermore, the interval between the time the filename is generated and the time the file itself is created creates a race condition. How? Well, Linux and Unix systems use time slicing, a technique that shares the processor among all the executing processes. This means that, although a program appears to be running all the time, in actuality, there are times when processes are sleeping, that is, waiting to run on the processor. Now, consider a professor’s program for tracking student grades. Both the professor and a malicious student are using a heavily loaded, multiuser system at the same time.
The professor's program uses mktemp() to create temporary files, and the student, having in the past watched the grading program create and remove temporary files, has figured out the algorithm that mktemp() uses. (The GLIBC version doesn't have this problem, but not all systems use GLIBC!) Figure 12.2 illustrates the race condition and how the student takes advantage of it.

FIGURE 12.2 Race condition with mktemp()
[Timeline with four points. 1: Grader's mktemp() gives back a name. 2: The OS stops the grader's program; the student creates a file with the same name and an extra link. 3: Grader opens the file and writes data. 4: Student saves a copy of the grades.]
Here's what happened.

1. The grading program uses mktemp() to generate a filename. Upon return from mktemp(), the race condition window is now open (line 19 in ch12-mktemp.c).

2. The kernel stops the grader so that other programs on the system can run. This happens before the call to open(). While the grader is stopped, the student creates the file with the same name mktemp() returned to the grader program. (Remember, the algorithm was easy to figure out.) The student creates the file with an extra link to it, so that when the grading program unlinks the file, it will still be available for perusal.

3. The grader program now opens the file and writes data to it. The student created the file with -rw-rw-rw- permissions, so this isn't a problem.

4. When the grader program is finished, it unlinks the temporary file. However, the student still has a copy. For example, there may be a profit opportunity to sell classmates their grades in advance.

Our example is simplistic; besides just stealing the grade data, a clever (if immoral) student might be able to change the data in place. If the professor doesn't double-check his program's results, no one would be the wiser.
NOTE We do not recommend doing any of this! If you are a student, don’t try any of this. First and foremost, it is unethical. Second, it’s likely to get you kicked out of school. Third, your professors are probably not so naive as to have used mktemp() to code their programs. The example is for illustration only!
For the reasons given, and others, all three functions described in this section should never be used. They exist in POSIX and in GLIBC only to support old programs that were written before the dangers of these routines were understood. To this end, GNU/Linux systems generate a warning at link time:

$ cc ch12-mktemp.c -o ch12-mktemp                         Compile the program
/tmp/cc1XCvD9.o(.text+0x35): In function `main':
: the use of `mktemp' is dangerous, better use `mkstemp'
(We cover mkstemp() in the next subsection.) Should your system not have mkstemp(), think about how you might use these interfaces to simulate it. (See also “Exercises” for Chapter 12, page 482, at the end of the chapter.)
12.3.2 Creating and Opening Temporary Files (Good)

There are two functions that don't have race condition problems. One is intended for use with the stdio library:

#include <stdio.h>                                        ISO C

FILE *tmpfile(void);

The second function is for use with the file-descriptor-based system calls:

#include <stdlib.h>                                       XSI

int mkstemp(char *template);
tmpfile() returns a FILE * value representing a unique, open temporary file. The file is opened in "w+b" mode. The w+ means "open for reading and writing, truncate the file first," and the b means binary mode, not text mode. (There's no difference on a GNU/Linux or Unix system, but there is on other systems.) The file is automatically deleted when the file pointer is closed; there is no way to get to the file's name to save its contents. The program in ch12-tmpfile.c demonstrates tmpfile():
/* ch12-tmpfile.c --- demonstrate tmpfile().
   Error checking omitted for brevity */

#include <stdio.h>

int main(void)
{
    static char mesg[] = "Here's lookin' at you, kid!";   /* beats "hello, world" */
    FILE *fp;
    char buf[BUFSIZ];

    fp = tmpfile();                     /* Get temp file */
    fprintf(fp, "%s", mesg);            /* Write to it */
    fflush(fp);                         /* Force it out */

    rewind(fp);                         /* Move to front */
    fgets(buf, sizeof buf, fp);         /* Read contents */

    printf("Got back <%s>\n", buf);     /* Print retrieved data */

    fclose(fp);                         /* Close file, goes away */
    return 0;                           /* All done */
}
The returned FILE * value is no different from any other FILE * returned by fopen(). When run, the results are what's expected:

$ ch12-tmpfile
Got back <Here's lookin' at you, kid!>
We saw earlier that the GLIBC authors recommend use of the mkstemp() function:

$ cc ch12-mktemp.c -o ch12-mktemp                         Compile the program
/tmp/cc1XCvD9.o(.text+0x35): In function `main':
: the use of `mktemp' is dangerous, better use `mkstemp'
This function is similar to mktemp() in that it takes a filename ending in ‘XXXXXX’ and replaces those characters with a unique suffix to create a unique filename. However, it goes one step further. It creates and opens the file. The file is created with mode 0600 (that is, -rw-------). Thus, only the user running the program can access the file. Furthermore, and this is what makes mkstemp() more secure, the file is created using the O_EXCL flag, which guarantees that the file doesn’t exist and keeps anyone else from opening the file. The return value is an open file descriptor that can be used for reading and writing. The pathname now stored in the buffer passed to mkstemp() should be used to remove the file when you’re done. This is all demonstrated in ch12-mkstemp.c, which is a straightforward modification of ch12-tmpfile.c:
/* ch12-mkstemp.c --- demonstrate mkstemp().
   Error checking omitted for brevity */

#include <stdio.h>
#include <fcntl.h>      /* for open flags */
#include <limits.h>     /* for PATH_MAX */

int main(void)
{
    static char template[] = "/tmp/myfileXXXXXX";
    char fname[PATH_MAX];
    static char mesg[] = "Here's lookin' at you, kid!\n";   /* beats "hello, world" */
    int fd;
    char buf[BUFSIZ];
    int n;

    strcpy(fname, template);              /* Copy template */
    fd = mkstemp(fname);                  /* Create and open temp file */
    printf("Filename is %s\n", fname);    /* Print it for information */

    write(fd, mesg, strlen(mesg));        /* Write something to file */

    lseek(fd, 0L, SEEK_SET);              /* Rewind to front */
    n = read(fd, buf, sizeof(buf));       /* Read data back; NOT '\0' terminated! */
    printf("Got back: %.*s", n, buf);     /* Print it out for verification */

    close(fd);                            /* Close file */
    unlink(fname);                        /* Remove it */

    return 0;
}
When run, the results are as expected:

$ ch12-mkstemp
Filename is /tmp/myfileuXFW1N
Got back: Here's lookin' at you, kid!
12.3.3 Using the TMPDIR Environment Variable

Many standard utilities pay attention to the TMPDIR environment variable, using the directory it names as the place in which to put their temporary files. If TMPDIR isn't set, the default directory for temporary files is usually /tmp, although most modern systems have a /var/tmp directory as well. /tmp is usually cleared of all files and directories by administrative shell scripts at system startup.

Many GNU/Linux systems provide a directory /dev/shm that uses the tmpfs filesystem type:
$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda2              6198436   5136020    747544  88% /
/dev/hda5             61431520  27720248  30590648  48% /d
none                    256616         0    256616   0% /dev/shm
The tmpfs filesystem type provides a RAM disk: a portion of memory used as if it were a disk. Furthermore, the tmpfs filesystem uses the Linux kernel's virtual memory mechanisms to allow it to grow beyond a fixed size limit. If you have lots of RAM in your system, this approach can provide a noticeable speedup.

To test performance, we started with the /usr/share/dict/linux.words file, which is a sorted list of correctly spelled words, one per line. We then randomized this file so that it was no longer sorted and created a larger file containing 500 copies of the random version of the file:

$ ls -l /tmp/randwords.big                                Show size
-rw-r--r--    1 arnold   devel   204652500 Sep 18 16:02 /tmp/randwords.big
$ wc -l /tmp/randwords.big                                How many words?
22713500 /tmp/randwords.big                               Over 22 million!
We then sorted the file, first using the default /tmp directory, and then with TMPDIR set to /dev/shm:5

$ time sort /tmp/randwords.big > /dev/null                Use real files

real    1m32.566s
user    1m23.137s
sys     0m1.740s

$ time TMPDIR=/dev/shm sort /tmp/randwords.big > /dev/null    Use RAM disk

real    1m28.257s
user    1m18.469s
sys     0m1.602s
Interestingly, using the RAM disk was only marginally faster than using regular files. (On some further tests, it was actually slower!) We conjecture that the kernel’s buffer cache (see Section 4.6.2, “Creating Files with creat(),” page 109) comes into the picture, quite effectively speeding up file I/O.6
5 This use of /dev/shm is really an abuse of it; it's intended for use in implementing shared memory, not for use as a RAM disk. Nevertheless, it's helpful for illustrating our point.

6 Our system has 512 megabytes of RAM, which to old fogies like the author seems like a lot. However, RAM prices have fallen, and systems with one or more gigabytes of RAM are not uncommon, at least for software developers.
The RAM disk has a significant disadvantage: It is limited by the amount of swap space configured on your system.7 When we tried to sort a file containing 1,000 copies of the randomized words file, the RAM disk ran out of room, whereas the regular sort finished successfully.

Using TMPDIR for your own programs is straightforward. We offer the following outline:

const char template[] = "myprog.XXXXXX";
char *tmpdir, *tfile;
size_t count;
int fd;

if ((tmpdir = getenv("TMPDIR")) == NULL)            Use TMPDIR value if there
    tmpdir = "/tmp";                                Otherwise, default to /tmp

count = strlen(tmpdir) + strlen(template) + 2;      Compute size of filename
tfile = (char *) malloc(count);                     Allocate space for it
if (tfile == NULL)                                  Check for error
    /* recover */

sprintf(tfile, "%s/%s", tmpdir, template);          Create final template
fd = mkstemp(tfile);                                Create and open file

… use tempfile via fd …

close(fd);                                          Clean up
unlink(tfile);
free(tfile);
Depending on your application’s needs, you may wish to unlink the file immediately after opening it instead of doing so as part of the cleanup.
12.4 Committing Suicide: abort()

There are times when a program just can't continue. Generally, the best thing to do is to generate an error message and call exit(). However, particularly for errors that are likely to be programming problems, it's helpful to not just exit but also to produce a core dump, which saves the state of the running program in a file for later examination with a debugger. This is the job of the abort() function:

#include <stdlib.h>                                       ISO C

void abort(void);

7 Swap space consists of one or more dedicated chunks of disk, used to hold the parts of executing processes that are not currently in memory.
The abort() function sends a SIGABRT signal to the process itself. This happens even if SIGABRT is blocked or ignored. The normal action for SIGABRT, which is to produce a core dump, then takes place. An example of abort() in action is the assert() macro described at the beginning of this chapter. When assert() finds that its expression is false, it prints an error message and then calls abort() to dump core. According to the C standard, it is implementation defined whether or not abort() does any cleanup actions. Under GNU/Linux it does: All FILE * streams are closed before the program exits. Note, however, that nothing is done for open files that use the file descriptor-based system calls. (Nothing needs to be done if all that are open are files or pipes. Although we don’t discuss it, file descriptors are also used for network connections, and not closing them cleanly is poor practice.)
12.5 Nonlocal Gotos

    "Go directly to jail. Do not pass GO. Do not collect $200."
        —Monopoly—
You undoubtedly know what a goto is: a transfer of control flow to a label somewhere else within the current function. goto statements, when used sparingly, can contribute to the readability and correctness of a function. (For example, when all error checks use a goto to a label at the end of a function, such as clean_up, the code at that label then cleans things up [closing files, etc.] and returns.) When used poorly, goto statements can lead to so-called spaghetti code, the logic of which becomes impossible to follow. The goto statement is constrained by the C language to jump to a label within the current function. Many languages in the Algol family, such as Pascal, allow a goto to "jump out" of a nested function into an earlier calling function. However, in C, there is no way, within the syntax of the language itself, to jump to a location in a different function, even a calling one. Such a jump is termed a nonlocal goto. Why is a nonlocal goto useful? Consider an interactive program that reads commands and processes them. Suppose the user starts a long-running task, gets frustrated or changes his mind about doing the task, and then presses CTRL-C to generate a SIGINT signal. When the signal handler runs, it can jump back to the start of the main
read-commands-and-process-them loop. The ed line editor provides a straightforward example of this:

$ ed -p '> ' sayings                      Start ed, use '> ' as prompt
sayings: No such file or directory
> a                                       Append text
Hello, world
Don't panic
^C                                        Generate SIGINT
?                                         The "one size fits all" error message;
                                          ed returns to command loop
> 1,$p                                    '1,$p' prints all the lines
Hello, world
Don't panic
> w                                       Save file
25
> q                                       All done
Internally, ed sets up a return point before the command loop, and the signal handler then does a nonlocal goto back to the return point.
12.5.1 Using Standard Functions: setjmp() and longjmp()

Nonlocal gotos are accomplished with the setjmp() and longjmp() functions. These functions come in two flavors. The traditional routines are defined by the ISO C standard:

#include <setjmp.h>                                       ISO C

int setjmp(jmp_buf env);
void longjmp(jmp_buf env, int val);
The jmp_buf type is typedef'd in <setjmp.h>. setjmp() saves the current "environment" in env. env is typically a global or file-level static variable so that it can be used from a called function. This environment includes whatever information is necessary for jumping to the location at which setjmp() is called. The contents of a jmp_buf are by nature machine dependent; thus, jmp_buf is an opaque type: something you use without knowing what's inside it.

setjmp() returns 0 when it is called to save the current environment in a jmp_buf. It returns nonzero when a nonlocal jump is made using the environment:

jmp_buf command_loop;                     At global level

… then in main() …
if (setjmp(command_loop) == 0)            State saved OK, proceed on
    ;
else                                      We get here via nonlocal goto
    printf("?\n");                        ed's famous message

… now start command loop …
longjmp() makes the jump. The first parameter is a jmp_buf that must have been initialized by setjmp(). The second is an integer nonzero value that setjmp() returns in the original environment. This is so that code such as that just shown can distinguish between setting the environment and arriving by way of a nonlocal jump.
The C standard states that even if longjmp() is called with a second argument of 0, setjmp() still returns nonzero. In such a case, it instead returns 1.
The ability to pass an integer value and have that come back from the return of setjmp() is useful; it lets user-level code distinguish the reason for the jump. For instance, gawk uses this capability to handle the break and continue statements inside loops. (The awk language is deliberately similar to C in its syntax for loops, with while, do-while, and for loops, and break and continue.) The use of setjmp() looks like this (from eval.c in the gawk 3.1.3 distribution):

507          case Node_K_while:
508              PUSH_BINDING(loop_tag_stack, loop_tag, loop_tag_valid);
509
510              stable_tree = tree;
511              while (eval_condition(stable_tree->lnode)) {
512                  INCREMENT(stable_tree->exec_count);
513                  switch (setjmp(loop_tag)) {
514                  case 0:                /* normal non-jump */
515                      (void) interpret(stable_tree->rnode);
516                      break;
517                  case TAG_CONTINUE:     /* continue statement */
518                      break;
519                  case TAG_BREAK:        /* break statement */
520                      RESTORE_BINDING(loop_tag_stack, loop_tag, loop_tag_valid);
521                      return 1;
522                  default:
523                      cant_happen();
524                  }
525              }
526              RESTORE_BINDING(loop_tag_stack, loop_tag, loop_tag_valid);
527              break;
This code fragment represents a while loop. Line 508 manages nested loops by means of a stack of saved jmp_buf variables. Lines 511–524 run the while loop (using a C while loop!). Line 511 tests the loop’s condition. If it’s true, line 513 does a switch on the setjmp() return value. If it’s 0 (lines 514–516), then line 515 runs the statement’s body. However, when setjmp() returns either TAG_BREAK or TAG_CONTINUE,
the switch statement handles them appropriately (lines 517–518 and 519–521, respectively). An awk-level break statement passes TAG_BREAK to longjmp(), and the awk-level continue passes TAG_CONTINUE. Again, from eval.c, with some irrelevant details omitted:

657      case Node_K_break:
658          INCREMENT(tree->exec_count);
             ...
675          longjmp(loop_tag, TAG_BREAK);
676          break;
677
678      case Node_K_continue:
679          INCREMENT(tree->exec_count);
             ...
696          longjmp(loop_tag, TAG_CONTINUE);
697          break;
You can think of setjmp() as placing the label, and longjmp() as the goto, with the extra advantage of being able to tell where the code “came from” (by the return value).
12.5.2 Handling Signal Masks: sigsetjmp() and siglongjmp()

For historical reasons that would most likely bore you to tears, the 1999 C standard is silent about the effect of setjmp() and longjmp() on the state of a process's signals, and POSIX states explicitly that their effect on the process signal mask (see Section 10.6, "POSIX Signals," page 367) is undefined. In other words, if a program changes its process signal mask between the first call to setjmp() and a call to longjmp(), what is the state of the process signal mask after the longjmp()? Is it the mask in effect when setjmp() was first called? Or is it the current mask? POSIX says explicitly "there's no way to know." To make handling of the process signal mask explicit, POSIX introduced two additional functions and one typedef:

#include <setjmp.h>                                          POSIX

int sigsetjmp(sigjmp_buf env, int savesigs);                 Note: sigjmp_buf, not jmp_buf!
void siglongjmp(sigjmp_buf env, int val);
The main difference is the savesigs argument to sigsetjmp(). If nonzero, then the current set of blocked signals is saved in env, along with the rest of the environment that would be saved by setjmp(). A siglongjmp() with an env where savesigs was true restores the saved process signal mask. NOTE POSIX is also clear that if savesigs is zero (false), it’s undefined whether the process signal mask is saved and restored, just like setjmp()/longjmp(). This in turn implies that if you’re going to use ‘sigsetjmp(env, 0)’ you may as well not bother: The whole point is to have control over saving and restoring the process signal mask!
12.5.3 Observing Important Caveats

There are several technical caveats to be aware of. First, because the environment saving and restoring can be messy, machine-dependent tasks, setjmp() and longjmp() are allowed to be macros. Second, the C standard limits the use of setjmp() to the following situations:

• As the sole controlling expression of a loop or condition statement (if, switch).

• As one operand of a comparison expression (==, !=, <, and so on), with the other operand an integer constant expression, and with the comparison serving as the sole controlling expression.

• As the operand of the ! operator, with the result serving as the sole controlling expression.

• As an expression statement all by itself, with the return value discarded.

    printf("--> "); fflush(stdout);
Chapter 13 • Internationalization and Localization
19      while (fgets(buf, sizeof buf, stdin) != NULL) {
20          locale[0] = '\0';
21          count = sscanf(buf, "%s %s %s", left, right, locale);
22          if (count < 2)
23              break;
24
25          if (*locale) {
26              setlocale(LC_ALL, locale);
27              strcpy(curloc, locale);
28          }
29
30          printf("%s: strcmp(\"%s\", \"%s\") is %d\n", curloc,
31              left, right, strcmp(left, right));
32          printf("%s: strcoll(\"%s\", \"%s\") is %d\n", curloc,
33              left, right, strcoll(left, right));
34
35          printf("\n--> "); fflush(stdout);
36      }
37
38      exit(0);
39  }
The program reads input lines, which consist of two words to compare and, optionally, a locale to use for the comparison. If the locale is given, it becomes the locale for subsequent entries. The program starts out with whatever locale is set in the environment. The curloc array saves the current locale for printing results; left and right are the left- and right-hand words to compare (lines 10–11). The main part of the program is a loop (lines 19–36) that reads lines and does the work. Lines 20–23 split up the input line. locale is initialized to the empty string, in case a third value isn't provided. Lines 25–28 set the new locale if there is one. Lines 30–33 print the comparison results, and line 35 prompts for more input. Here's a demonstration:

$ ch13-compare                                 Run the program
--> ABC abc                                    Enter two words
C: strcmp("ABC", "abc") is -1                  Program started in "C" locale
C: strcoll("ABC", "abc") is -1                 Identical results in "C" locale

--> ABC abc en_US                              Same words, "en_US" locale
en_US: strcmp("ABC", "abc") is -1              strcmp() results don't change
en_US: strcoll("ABC", "abc") is 2              strcoll() results do!

--> ABC abc en_US.UTF-8                        Same words, "en_US.UTF-8" locale
en_US.UTF-8: strcmp("ABC", "abc") is -1
en_US.UTF-8: strcoll("ABC", "abc") is 6

--> junk JUNK                                  New words; previous locale used
en_US.UTF-8: strcmp("junk", "JUNK") is 1       Different value, still positive
en_US.UTF-8: strcoll("junk", "JUNK") is -6
13.2 Locales and the C Library
This program clearly demonstrates the difference between strcmp() and strcoll(). Since strcmp() works in accordance with the numeric character values, it always returns the same result. strcoll() understands collation issues, and its result varies according to the locale. We see that in both en_US locales, the uppercase letters come after the lowercase ones. NOTE Locale-specific string collation is also an issue in regular-expression matching. Regular expressions allow character ranges within bracket expressions, such as ‘[a-z]’ or ‘["-/]’. The exact meaning of such a construct (the characters numerically between the start and end points, inclusive) is defined only for the "C" and "POSIX" locales. For non-ASCII locales, a range such as ‘[a-z]’ can also match uppercase letters, not just lowercase ones! The range ‘["-/]’ is valid in ASCII, but not in "en_US.UTF-8". The long-term most portable solution is to use POSIX character classes, such as ‘[[:lower:]]’ and ‘[[:punct:]]’. If you find yourself needing to use range expressions on systems that are locale-aware and on older systems that are not, but without having to change your program, the solution is to use brute force and list each character individually within the brackets. It isn’t pretty, but it works.
Locale-based collation is potentially expensive. If you expect to be doing lots of comparisons, where at least one of the strings will not change or where string values will be compared against each other multiple times (such as in sorting a list), then you should consider using the strxfrm() function to convert your strings to versions that can be used with strcmp(). The strxfrm() function is declared as follows:

#include <string.h>                                          ISO C

size_t strxfrm(char *dest, const char *src, size_t n);
The idea is that strxfrm() transforms the first n characters of src, placing them into dest. The return value is the number of characters necessary to hold the transformed characters. If this is more than n, then the contents of dest are “indeterminate.” The POSIX standard explicitly allows n to be zero and dest to be NULL. In this case, strxfrm() returns the size of the array needed to hold the transformed version of src (not including the final '\0' character). Presumably, this value would then be used with malloc() for creating the dest array or for checking the size against a predefined
array bound. (When doing this, obviously, src must have a terminating zero byte.) This fragment illustrates how to use strxfrm(): #define STRBUFSIZE ... char s1[STRBUFSIZE], s2[STRBUFSIZE]; char s1x[STRBUFSIZE], s2x[STRBUFSIZE]; size_t len1, len2; int cmp;
Original strings Transformed copies
… fill in s1 and s2 … len1 = strlen(s1); len2 = strlen(s2); if (strxfrm(s1x, s1, len1) >= STRBUFSIZE || strxfrm(s2x, s2, len2) >= STRBUFSIZE) /* too big, recover */ cmp = strcmp(s1x, s2x); if (cmp == 0) /* equal */ else if (cmp < 0) /* s1 < s2 */ else /* s1 > s2 */
For one-time comparisons, it is probably faster to use strcoll() directly. But if strings will be compared multiple times, then using strxfrm() once and strcmp() on the transformed values will be faster. There are no locale-aware collation functions that correspond to strncmp() or strcasecmp().
13.2.4 Low-Level Numeric and Monetary Formatting: localeconv()

Correctly formatting numeric and monetary values requires a fair amount of low-level information. Said information is available in the struct lconv, which is retrieved with the localeconv() function:

#include <locale.h>                                          ISO C

struct lconv *localeconv(void);
Similarly to the ctime() function, localeconv() returns a pointer to internal static data. You should make a copy of the returned data since subsequent calls could return different values if the locale has been changed. Here is the struct lconv (condensed slightly), direct from GLIBC's <locale.h>:
struct lconv {
    /* Numeric (non-monetary) information. */
    char *decimal_point;        /* Decimal point character. */
    char *thousands_sep;        /* Thousands separator. */
    /* Each element is the number of digits in each group;
       elements with higher indices are farther left.
       An element with value CHAR_MAX means that no further grouping is done.
       An element with value 0 means that the previous element is used
       for all groups farther left. */
    char *grouping;

    /* Monetary information. */
    /* First three chars are a currency symbol from ISO 4217.
       Fourth char is the separator.  Fifth char is '\0'. */
    char *int_curr_symbol;
    char *currency_symbol;      /* Local currency symbol. */
    char *mon_decimal_point;    /* Decimal point character. */
    char *mon_thousands_sep;    /* Thousands separator. */
    char *mon_grouping;         /* Like `grouping' element (above). */
    char *positive_sign;        /* Sign for positive values. */
    char *negative_sign;        /* Sign for negative values. */
    char int_frac_digits;       /* Int'l fractional digits. */
    char frac_digits;           /* Local fractional digits. */
    char p_cs_precedes;         /* 1 if currency_symbol precedes a positive value, 0 if succeeds. */
    char p_sep_by_space;        /* 1 iff a space separates currency_symbol from a positive value. */
    char n_cs_precedes;         /* 1 if currency_symbol precedes a negative value, 0 if succeeds. */
    char n_sep_by_space;        /* 1 iff a space separates currency_symbol from a negative value. */
    /* Positive and negative sign positions:
       0 Parentheses surround the quantity and currency_symbol.
       1 The sign string precedes the quantity and currency_symbol.
       2 The sign string follows the quantity and currency_symbol.
       3 The sign string immediately precedes the currency_symbol.
       4 The sign string immediately follows the currency_symbol. */
    char p_sign_posn;
    char n_sign_posn;
    char int_p_cs_precedes;     /* 1 if int_curr_symbol precedes a positive value, 0 if succeeds. */
    char int_p_sep_by_space;    /* 1 iff a space separates int_curr_symbol from a positive value. */
    char int_n_cs_precedes;     /* 1 if int_curr_symbol precedes a negative value, 0 if succeeds. */
    char int_n_sep_by_space;    /* 1 iff a space separates int_curr_symbol from a negative value. */
    /* Positive and negative sign positions:
       0 Parentheses surround the quantity and int_curr_symbol.
       1 The sign string precedes the quantity and int_curr_symbol.
       2 The sign string follows the quantity and int_curr_symbol.
       3 The sign string immediately precedes the int_curr_symbol.
       4 The sign string immediately follows the int_curr_symbol. */
    char int_p_sign_posn;
    char int_n_sign_posn;
};
The comments make it fairly clear what's going on. Let's look at the first several fields in the struct lconv:

decimal_point
    The decimal point character to use. In the United States and other English-speaking countries, it's a period, but many countries use a comma.

thousands_sep
    The character used to separate each group of three digits in a value.

grouping
    An array of single-byte integer values. Each element indicates how many digits to group. As the comment says, CHAR_MAX means no further grouping should be done, and 0 means reuse the last element. (We show some sample code later in the chapter.)

int_curr_symbol
    The international symbol for the local currency. For example, 'USD' for U.S. dollars.

currency_symbol
    The local symbol for the local currency. For example, $ for U.S. dollars.

mon_decimal_point, mon_thousands_sep, mon_grouping
    These correspond to the earlier fields, providing the same information, but for monetary amounts.

Most of the rest of the values are not useful for day-to-day programming. The following program, ch13-lconv.c, prints some of these values to give you a feel for what kind of information is available:
/* ch13-lconv.c --- show some of the components of the struct lconv */

#include <stdio.h>
#include <limits.h>
#include <locale.h>

int main(void)
{
    struct lconv l;
    int i;

    setlocale(LC_ALL, "");
    l = *localeconv();
    printf("decimal_point = [%s]\n", l.decimal_point);
    printf("thousands_sep = [%s]\n", l.thousands_sep);
    for (i = 0; l.grouping[i] != 0 && l.grouping[i] != CHAR_MAX; i++)
        printf("grouping[%d] = [%d]\n", i, l.grouping[i]);
    printf("int_curr_symbol = [%s]\n", l.int_curr_symbol);
    printf("currency_symbol = [%s]\n", l.currency_symbol);
    printf("mon_decimal_point = [%s]\n", l.mon_decimal_point);
    printf("mon_thousands_sep = [%s]\n", l.mon_thousands_sep);
    printf("positive_sign = [%s]\n", l.positive_sign);
    printf("negative_sign = [%s]\n", l.negative_sign);

    return 0;
}
When run with different locales, not surprisingly we get different results:

$ LC_ALL=en_US ch13-lconv                      Results for the United States
decimal_point = [.]
thousands_sep = [,]
grouping[0] = [3]
grouping[1] = [3]
int_curr_symbol = [USD ]
currency_symbol = [$]
mon_decimal_point = [.]
mon_thousands_sep = [,]
positive_sign = []
negative_sign = [-]

$ LC_ALL=it_IT ch13-lconv                      Results for Italy
decimal_point = [.]
thousands_sep = []
int_curr_symbol = []
currency_symbol = []
mon_decimal_point = []
mon_thousands_sep = []
positive_sign = []
negative_sign = []
Note how the value for int_curr_symbol in the "en_US" locale includes a trailing space character that acts to separate the symbol from the following monetary value.
13.2.5 High-Level Numeric and Monetary Formatting: strfmon() and printf()

After looking at all the fields in the struct lconv, you may be wondering, "Do I really have to figure out how to use all that information just to format a monetary value?" Fortunately, the answer is no.3 The strfmon() function does all the work for you:

#include <monetary.h>                                        POSIX

ssize_t strfmon(char *s, size_t max, const char *format, ...);
This routine is much like strftime() (see Section 6.1.3.2, "Complex Time Formatting: strftime()," page 171), using format to copy literal characters and formatted numeric values into s, placing no more than max characters into it. The following simple program, ch13-strfmon.c, demonstrates how strfmon() works:

/* ch13-strfmon.c --- demonstrate strfmon() */

#include <stdio.h>
#include <locale.h>
#include <monetary.h>

int main(void)
{
    char buf[BUFSIZ];
    double val = 1234.567;

    setlocale(LC_ALL, "");
    strfmon(buf, sizeof buf, "You owe me %n (%i)\n", val, val);
    fputs(buf, stdout);

    return 0;
}
When run in two different locales, it produces this output:

$ LC_ALL=en_US ch13-strfmon                    In the United States
You owe me $1,234.57 (USD 1,234.57)
$ LC_ALL=it_IT ch13-strfmon                    In Italy
You owe me EUR 1.235 (EUR 1.235)
3. We're as happy as you are, since we don't have to provide example code that uses this, er, full-featured struct.
As you can see, strfmon() is like strftime(), copying regular characters unchanged into the destination buffer and formatting arguments according to its own formatting specifications. There are only three:

%n    Print the national (that is, local) form of the currency value.
%i    Print the international form of the currency value.
%%    Print a literal % character.
The values to be formatted must be of type double. We see the difference between %n and %i in the "en_US" locale: %n uses a $ character, whereas %i uses USD, which stands for "U.S. Dollars."

Flexibility—and thus a certain amount of complexity—comes along with many of the APIs that were developed for POSIX, and strfmon() is no exception. As with printf(), several optional items that can appear between the % and the i or n provide increased control. The full forms are as follows:

%[flags][field width][#left-prec][.right-prec]i
%[flags][field width][#left-prec][.right-prec]n
%%                                             No flag, field width, etc., allowed
The flags are listed in Table 13.2.

TABLE 13.2  Flags for strfmon()

Flag  Meaning
=c    Use the character c for the numeric fill character, for use with the left
      precision. The default fill character is a space. A common alternative
      fill character is 0.
^     Disable the use of the grouping character (for example, a comma in the
      United States).
(     Enclose negative amounts in parentheses. Mutually exclusive with the + flag.
+     Handle positive/negative values normally. Use the locale's positive and
      negative signs. Mutually exclusive with the ( flag.
!     Do not include the currency symbol. This flag is useful if you wish to use
      strfmon() to get more flexible formatting of regular numbers than what
      sprintf() provides.
-     Left-justify the result. The default is right justification. This flag
      has no effect without a field width.
The field width is a decimal digit string, providing a minimum width. The default is to use as many characters as necessary based on the rest of the specification. Values smaller than the field width are padded with spaces on the left (or on the right, if the '-' flag was given).

The left precision consists of a # character and a decimal digit string. It indicates the minimum number of digits to appear to the left of the decimal point character;4 if the converted value is smaller than this, the result is padded with the numeric fill character. The default is a space, but the = flag can be used to change it. Grouping characters are not included in the count.

Finally, the right precision consists of a '.' character and a decimal digit string. This indicates how many digits to round the value to before it is formatted. The default is provided by the frac_digits and int_frac_digits fields in the struct lconv. If this value is 0, no decimal point character is printed.

strfmon() returns the number of characters placed into the buffer, not including the terminating zero byte. If there's not enough room, it returns -1 and sets errno to E2BIG.
Besides strfmon(), POSIX (but not ISO C) provides a special flag—the single-quote character, '—for the printf() formats %i, %d, %u, %f, %F, %g, and %G. In locales that supply a thousands separator, this flag adds the locale's thousands separator. The following simple program, ch13-quoteflag.c, demonstrates the output:

/* ch13-quoteflag.c --- demonstrate printf's quote flag */

#include <stdio.h>
#include <locale.h>

int main(void)
{
    setlocale(LC_ALL, "");      /* have to do this, or it won't work */
    printf("%'d\n", 1234567);
    return 0;
}
Here’s what happens for two different locales: one that does not supply a thousands separator and one that does:
4. The technical term used in the standards is radix point, since numbers in different bases may have fractional parts as well. However, for monetary values, it seems pretty safe to use the term "decimal point."
$ LC_ALL=C ch13-quoteflag                      Traditional environment, no separator
1234567
$ LC_ALL=en_US ch13-quoteflag                  English in United States locale, has separator
1,234,567
As of this writing, only GNU/Linux and Solaris support the ' flag. Double-check your system's printf(3) manpage.
13.2.6 Example: Formatting Numeric Values in gawk

gawk implements its own version of the printf() and sprintf() functions. For full locale awareness, gawk must support the ' flag, as in C. The following fragment, from the file builtin.c in gawk 3.1.4, shows how gawk uses the struct lconv for numeric formatting:
 1  case 'd':
 2  case 'i':
 3      ...
 4      tmpval = force_number(arg);
 5      ...
 6
 7      uval = (uintmax_t) tmpval;
 8      ...
 9      ii = jj = 0;
10      do {
11          *--cp = (char) ('0' + uval % 10);
12  #ifdef HAVE_LOCALE_H
13          if (quote_flag && loc.grouping[ii] && ++jj == loc.grouping[ii]) {
14              *--cp = loc.thousands_sep[0];   /* XXX - assumption it's one char */
15              if (loc.grouping[ii+1] == 0)
16                  jj = 0;     /* keep using current val in loc.grouping[ii] */
17              else if (loc.grouping[ii+1] == CHAR_MAX)
18                  quote_flag = FALSE;
19              else {
20                  jj = 0;
21                  ii++;
22              }
23          }
24  #endif
25          uval /= 10;
26      } while (uval > 0);
(The line numbers are relative to the start of the fragment.) Some parts of the code that aren't relevant to the discussion have been omitted to make it easier to focus on the parts that are important.

The variable loc, used in lines 13–17, is a struct lconv. It's initialized in main(). Of interest to us here are loc.thousands_sep, which is the thousands-separator character, and loc.grouping, which is an array describing how many digits appear between separators. A zero element means "use the value in the previous element for all subsequent digits," and a value of CHAR_MAX means "stop inserting thousands separators."

With that introduction, let's look at the code. Line 7 sets uval, which is an unsigned version of the value to be formatted. ii and jj keep track of the position in loc.grouping and the number of digits in the current group that have been converted, respectively.5 quote_flag is true when a ' character has been seen in a conversion specification.

The do-while loop generates digit characters in reverse, filling in a buffer from the back end toward the front end. Each digit is generated on line 11. Line 25 then divides by 10, shifting the value right by one decimal digit.

Lines 12–24 are what interest us. The work is done only on a system that supports locales, as indicated by the presence of the <locale.h> header file. The symbolic constant HAVE_LOCALE_H will be true on such a system.6 When the condition on line 13 is true, it's time to add a thousands-separator character. This condition can be read in English as "if grouping is requested, and the current position in loc.grouping indicates an amount for grouping, and the current count of digits equals the grouping amount." If this condition is true, line 14 adds the thousands-separator character. The comment notes an assumption that is probably true but that might come back to haunt the maintainer at some later time. (The 'XXX' is a traditional way of marking dangerous or doubtful code. It's easy to search for and very noticeable to readers of the code.)

Once the current position in loc.grouping has been used, lines 15–22 look ahead at the value in the next position. If it's 0, then the current position's value should continue to be used. We specify this by resetting jj to 0 (line 16).
On the other hand, if the next position is CHAR_MAX, no more grouping should be done, and line 18 turns it off entirely by setting quote_flag to false. Otherwise, the next value is a grouping value, so line 20 resets jj to 0, and line 21 increments ii.
5. We probably should have chosen more descriptive names than just ii and jj. Since the code that uses them is short, our lack of imagination is not a significant problem.

6. This is set by the Autoconf and Automake machinery. Autoconf and Automake are powerful software suites that make it possible to support a wide range of Unix systems in a systematic fashion.
This is low-level, detailed code. However, once you understand how the information in the struct lconv is presented, the code is straightforward to read (and it was straightforward to write).
13.2.7 Formatting Date and Time Values: ctime() and strftime()

Section 6.1, "Times and Dates," page 166, described the functions for retrieving and formatting time and date values. The strftime() function is also locale-aware if setlocale() has been called appropriately. The following simple program, ch13-times.c, demonstrates this:

/* ch13-times.c --- demonstrate locale-based times */

#include <stdio.h>
#include <locale.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    char buf[100];
    time_t now;
    struct tm *curtime;

    setlocale(LC_ALL, "");
    time(& now);
    curtime = localtime(& now);
    (void) strftime(buf, sizeof buf,
            "It is now %A, %B %d, %Y, %I:%M %p", curtime);
    printf("%s\n", buf);

    printf("ctime() says: %s", ctime(& now));
    exit(0);
}
When the program is run, we see that indeed the strftime() results vary while the ctime() results do not:

$ LC_ALL=en_US ch13-times                      Time in the United States
It is now Friday, July 11, 2003, 10:35 AM
ctime() says: Fri Jul 11 10:35:55 2003

$ LC_ALL=it_IT ch13-times                      Time in Italy
It is now venerdì, luglio 11, 2003, 10:36
ctime() says: Fri Jul 11 10:36:00 2003

$ LC_ALL=fr_FR ch13-times                      Time in France
It is now vendredi, juillet 11, 2003, 10:36
ctime() says: Fri Jul 11 10:36:05 2003
The reason for the lack of variation is that ctime() (and asctime(), upon which ctime() is based) are legacy interfaces; they exist to support old code. strftime(), being a newer interface (developed initially for C89), is free to be locale-aware.
13.2.8 Other Locale Information: nl_langinfo()

Although we said earlier that the catgets() API is hard to use, one part of that API is generally useful: nl_langinfo(). It provides additional locale-related information, above and beyond that which is available from the struct lconv:

#include <nl_types.h>
#include <langinfo.h>                                        XSI

char *nl_langinfo(nl_item item);
The <nl_types.h> header file defines the nl_item type. (This is most likely an int or an enum.) The item parameter is one of the symbolic constants defined in <langinfo.h>. The return value is a string that can be used as needed, either directly or as a format string for strftime().

The available information comes from several locale categories. Table 13.3 lists the item constants, the corresponding locale category, and the item's meaning.

An era is a particular time in history. As it relates to dates and times, it makes the most sense in countries ruled by emperors or dynasties.7 POSIX era specifications can describe eras before A.D. 1. In such a case, the start date has a higher absolute numeric value than the end date. For example, Alexander the Great ruled from 336 B.C. to 323 B.C.

The value returned by 'nl_langinfo(ERA)', if not NULL, consists of one or more era specifications. Each specification is separated from the next by a ; character. Components of each era specification are separated from each other by a : character. The components are described in Table 13.4.
7. Although Americans often refer to the eras of particular presidents, these are not a formal part of the national calendar in the same sense as in pre-World War II Japan or pre-Communist China.
TABLE 13.3  Item values for nl_langinfo()

Item name               Category      Meaning
ABDAY_1, ..., ABDAY_7   LC_TIME       The abbreviated names of the days of the week. Sunday is Day 1.
ABMON_1, ..., ABMON_12  LC_TIME       The abbreviated names of the months.
ALT_DIGITS              LC_TIME       Alternative symbols for digits; see text.
AM_STR, PM_STR          LC_TIME       The a.m./p.m. notations for the locale.
CODESET                 LC_CTYPE      The name of the locale's codeset; that is, the character set and encoding in use.
CRNCYSTR                LC_MONETARY   The local currency symbol, described below.
DAY_1, ..., DAY_7       LC_TIME       The names of the days of the week. Sunday is Day 1.
D_FMT                   LC_TIME       The date format.
D_T_FMT                 LC_TIME       The date and time format.
ERA_D_FMT               LC_TIME       The era date format.
ERA_D_T_FMT             LC_TIME       The era date and time format.
ERA_T_FMT               LC_TIME       The era time format.
ERA                     LC_TIME       Era description segments; see text.
MON_1, ..., MON_12      LC_TIME       The names of the months.
RADIXCHAR               LC_NUMERIC    The radix character. For base 10, this is the decimal point character.
THOUSEP                 LC_NUMERIC    The thousands-separator character.
T_FMT_AMPM              LC_TIME       The time format with a.m./p.m. notation.
T_FMT                   LC_TIME       The time format.
YESEXPR, NOEXPR         LC_MESSAGES   Strings representing positive and negative responses.
TABLE 13.4  Era specification components

Component   Meaning
Direction   A + or '-' character. A + indicates that the era runs from a
            numerically lower year to a numerically higher one; a '-'
            indicates the opposite.
Offset      The year closest to the start date of the era.
Start date  The date when the era began, in the form 'yyyy/mm/dd'. These are
            the year, month, and day, respectively. Years before A.D. 1 use a
            negative value for yyyy.
End date    The date when the era ended, in the same form. Two additional
            special forms are allowed: -* means the "beginning of time," and
            +* means the "end of time."
Era name    The name of the era, corresponding to strftime()'s %EC conversion
            specification.
Era format  The format of the year within the era, corresponding to
            strftime()'s %EY conversion specification.
The ALT_DIGITS value also needs some explanation. Some locales provide for "alternative digits." (Consider Arabic, which uses the decimal numbering system but different glyphs for the digits 0–9. Or consider a hypothetical "Ancient Rome" locale using roman numerals.) These come up, for example, in strftime()'s various %Oc conversion specifications. The return value for 'nl_langinfo(ALT_DIGITS)' is a semicolon-separated list of character strings for the alternative digits. The first should be used for 0, the next for 1, and so on. POSIX states that up to 100 alternative symbols may be provided. The point is to avoid restricting locales to the use of the ASCII digit characters when a locale has its own numbering system.

Finally, 'nl_langinfo(CRNCYSTR)' returns the local currency symbol. The first character of the return value, if it's a '-', +, or '.', indicates how the symbol should be used:

-    The symbol should appear before the value.
+    The symbol should appear after the value.
.    The symbol should replace the radix character (decimal point).
13.3 Dynamic Translation of Program Messages

The standard C library interfaces just covered solve the easy parts of the localization problem. Monetary, numeric, and time and date values, as well as string collation issues, all lend themselves to management through tables of locale-specific data (such as lists of month and day names). However, most user interaction with a text-based program occurs in the form of the messages it outputs, such as prompts or error messages. The problem is to avoid having multiple versions of the same program that differ only in the contents of the message strings. The de facto solution in the GNU world is GNU gettext. (GUI programs face similar issues with the items in menus and menu bars; typically, each major user interface toolkit has its own way to solve that problem.)

GNU gettext enables translation of program messages into different languages at runtime. Within the code for a program, this translation involves several steps, each of which uses different library functions. Once the program itself has been properly prepared, several shell-level utilities facilitate the preparation of translations into different languages. Each such translation is referred to as a message catalog.
13.3.1 Setting the Text Domain: textdomain()

A complete application may contain multiple components: individual executables written in C or C++ or in scripting languages that can also access gettext facilities, such as gawk or the Bash shell. The components of the application all share the same text domain, which is a string that uniquely identifies the application. (Examples might be "gawk" or "coreutils"; the former is a single program, and the latter is a whole suite of programs.) The text domain is set with textdomain():

#include <libintl.h>                                         GLIBC

char *textdomain(const char *domainname);
Each component should call this function with a string naming the text domain as part of the initial startup activity in main(). The return value is the current text domain. If the domainname argument is NULL, then the current domain is returned; otherwise, it is set to the new value and that value is then returned. A return value of NULL indicates an error of some sort. If the text domain is not set with textdomain(), the default domain is "messages".
Chapter 13 • Internationalization and Localization
508
13.3.2 Translating Messages: gettext()

The next step after setting the text domain is to use the gettext() function (or a variant) for every string that should be translated. Several functions provide translation services:

    #include <libintl.h>                                          GLIBC

    char *gettext(const char *msgid);
    char *dgettext(const char *domainname, const char *msgid);
    char *dcgettext(const char *domainname, const char *msgid, int category);
The arguments used in these functions are as follows:

const char *msgid
    The string to be translated. It acts as a key into a database of translations.

const char *domainname
    The text domain from which to retrieve the translation. Thus, even though main() has called textdomain() to set the application’s own domain, messages can be retrieved from other text domains. (This is most applicable to messages that might be in the text domain for a third-party library, for example.)

int category
    One of the domain categories described earlier (LC_TIME, etc.).

The default text domain is whatever was set with textdomain() ("messages" if textdomain() was never called). The default category is LC_MESSAGES. Assume that main() makes the following call:

    textdomain("killerapp");
Then, ‘gettext("my message")’ is equivalent to ‘dgettext("killerapp", "my message")’. Both of these, in turn, are equivalent to ‘dcgettext("killerapp", "my message", LC_MESSAGES)’. You will want to use gettext() 99.9 percent of the time. However, the other functions give you the flexibility to work with other text domains or locale categories. You are most likely to need this flexibility when doing library programming, since a standalone library will almost certainly be in its own text domain. All the functions return a string. The string is either the translation of the given msgid or, if no translation exists, the original string. Thus, there is always some output, even if it’s just the original (presumably English) message. For example:
    /* The canonical first program, localized version. */

    #include <stdio.h>
    #include <locale.h>
    #include <libintl.h>

    int main(void)
    {
        setlocale(LC_ALL, "");
        printf("%s\n", gettext("hello, world"));
        return 0;
    }
Although the message is a simple string, we don’t use it directly as the printf() control string, since in general, translations can contain % characters. Shortly, in Section 13.3.4, “Making gettext() Easy to Use,” page 510, we’ll see how to make gettext() easier to use in large-scale, real-world programs.
13.3.3 Working with Plurals: ngettext()

Translating plurals presents special difficulties. Naive code might look like this:

    printf("%d word%s misspelled\n", nwords, nwords > 1 ? "s" : "");
    /* or */
    printf("%d %s misspelled\n", nwords, nwords == 1 ? "word" : "words");

This is reasonable for English, but translation becomes difficult. First of all, many languages don’t use as simple a plural form as English (adding an s suffix for most words). Second, many languages, particularly in Eastern Europe, have multiple plural forms, each indicating how many objects the form designates. Thus, even code like this isn’t enough:

    if (nwords == 1)
        printf("one word misspelled\n");
    else
        printf("%d words misspelled\n", nwords);
The solution is a parallel set of routines specifically for formatting plural values:

    #include <libintl.h>                                          GLIBC

    char *ngettext(const char *msgid, const char *msgid_plural,
                   unsigned long int n);
    char *dngettext(const char *domainname, const char *msgid,
                    const char *msgid_plural, unsigned long int n);
    char *dcngettext(const char *domainname, const char *msgid,
                     const char *msgid_plural, unsigned long int n,
                     int category);
Besides the original msgid argument, these functions accept additional arguments:

const char *msgid_plural
    The default string to use for plural values. Examples shortly.

unsigned long int n
    The number of items there are.

Each locale’s message catalog specifies how to translate plurals.8 The ngettext() function (and its variants) examines n and, based on the specification in the message catalog, returns the appropriate translation of msgid. If the catalog does not have a translation for msgid, or in the "C" locale, ngettext() returns msgid if ‘n == 1’; otherwise, it returns msgid_plural. Thus, our misspelled words example looks like this:

    printf(ngettext("%d word misspelled\n", "%d words misspelled\n", nwords),
           nwords);
Note that nwords must be passed to ngettext() to select a format string, and then to printf() for formatting. In addition, be careful not to use a macro or expression whose value changes each time, like ‘n++’! Such a thing could happen if you’re doing global editing to add calls to ngettext() and you don’t pay attention.
13.3.4 Making gettext() Easy to Use

The call to gettext() in program source code serves two purposes. First, it does the translation at runtime, which is the main point, after all. However, it also serves to mark the strings that need translating. The xgettext utility reads program source code and extracts all the original strings that need translation. (We briefly cover the mechanics of this later in the chapter.) Consider the case, though, of static strings that aren’t used directly:

    static char *copyrights[] = {
        "Copyright 2004, Jane Programmer",
        "Permission is granted ...",
        ...                                      LOTS of legalese here
        NULL
    };
8 The details are given in the GNU gettext documentation. Here, we’re focusing on the developer’s needs, not the translator’s.
    void copyright(void)
    {
        int i;

        for (i = 0; copyrights[i] != NULL; i++)
            printf("%s\n", gettext(copyrights[i]));
    }
Here, we’d like to be able to print the translations of the copyright strings if they’re available. However, how is the xgettext extractor supposed to find these strings? We can’t enclose them in calls to gettext() because that won’t work at compile time:

    /* BAD CODE: won't compile */
    static char *copyrights[] = {
        gettext("Copyright 2004, Jane Programmer"),
        gettext("Permission is granted ..."),
        ...                                      LOTS of legalese here
        NULL
    };
13.3.4.1 Portable Programs: "gettext.h"

We assume here that you wish to write a program that can be used along with the GNU gettext library on any Unix system, not just GNU/Linux systems. The next section describes what to do for GNU/Linux-only programs. The solution to marking strings involves two steps. The first is the use of the gettext.h convenience header that comes in the GNU gettext distribution. This file handles several portability and compilation issues, making it easier to use gettext() in your own programs:

    #define ENABLE_NLS 1          ENABLE_NLS must be true for gettext() to work9
    #include "gettext.h"          Instead of <libintl.h>
If the ENABLE_NLS macro is not defined or it’s set to zero, then gettext.h expands calls to gettext() into the first argument. This makes it possible to port code using gettext() to systems that have neither GNU gettext installed nor their own version. Among other things, this header file defines the following macro:
9 This macro is usually automatically defined by the configure program, either in a special header or on the compiler command line. configure is generated with Autoconf and Automake.
    /* A pseudo function call that serves as a marker for the automated
       extraction of messages, but does not call gettext().  The run-time
       translation is done at a different place in the code.
       The argument, String, should be a literal string.  Concatenated strings
       and other string expressions won't work.
       The macro's expansion is not parenthesized, so that it is suitable as
       initializer for static 'char[]' or 'const char[]' variables.  */
    #define gettext_noop(String) String
The comment is self-explanatory. With this macro, we can now proceed to the second step. We rewrite the code as follows:

    #define ENABLE_NLS 1
    #include "gettext.h"

    static char copyrights[] =
        gettext_noop("Copyright 2004, Jane Programmer\n"
        "Permission is granted ...\n"
        ...                                      LOTS of legalese here
        "So there.");

    void copyright(void)
    {
        printf("%s\n", gettext(copyrights));
    }
Note that we made two changes. First, copyrights is now one long string, built up by using the Standard C string constant concatenation feature. This single string is then enclosed in the call to gettext_noop(). We need a single string so that the legalese can be translated as a single entity. The second change is to print the translation directly, as one string, in copyright().

By now, you may be thinking, “Gee, having to type ‘gettext(...)’ each time is pretty painful.” Well, you’re right. Not only is it extra work to type, it makes program source code harder to read as well. Thus, once you are using the gettext.h header file, the GNU gettext manual recommends the introduction of two more macros, named _() and N_(), as follows:

    #define ENABLE_NLS 1
    #include "gettext.h"

    #define _(msgid) gettext(msgid)
    #define N_(msgid) msgid
This approach reduces the burden of using gettext() to just three extra characters per translatable string constant and only four extra characters for static strings:
    #include <stdio.h>
    #include <stdlib.h>

    #define ENABLE_NLS 1
    #include "gettext.h"

    #define _(msgid) gettext(msgid)
    #define N_(msgid) msgid
    ...
    static char copyrights[] =
        N_("Copyright 2004, Jane Programmer\n"
        "Permission is granted ...\n"
        ...                                      LOTS of legalese here
        "So there.");

    void copyright(void)
    {
        printf("%s\n", gettext(copyrights));
    }

    int main(void)
    {
        setlocale(LC_ALL, "");       /* gettext.h gets <locale.h> for us too */
        printf("%s\n", _("hello, world"));
        copyright();
        exit(0);
    }
These macros are unobtrusive, and in practice, all GNU programs that use GNU gettext follow this convention. If you intend to use GNU gettext, you too should follow it.
13.3.4.2 GLIBC Only: <libintl.h>

For a program that will only be used on systems with GLIBC, the header file usage and macros are similar, but simpler:

    #include <stdio.h>
    #include <libintl.h>

    #define _(msgid) gettext(msgid)
    #define N_(msgid) msgid
… everything else is the same …
As we saw earlier, the <libintl.h> header file declares gettext() and the other functions. You still have to define _() and N_(), but you don’t have to worry about ENABLE_NLS, or distributing gettext.h with your program’s source code.
13.3.5 Rearranging Word Order with printf()

When translations are produced, sometimes the word order that is natural in English is incorrect for other languages. For instance, while in English an adjective appears before the noun it modifies, in many languages it appears after the noun. Thus, code like the following presents a problem:

    char *animal_color, *animal;

    if (...) {
        animal_color = _("brown");
        animal = _("cat");
    } else if (...) {
        ...
    } else {
        ...
    }

    printf(_("The %s %s looks at you enquiringly.\n"), animal_color, animal);
Here, the format string, animal_color, and animal are all properly enclosed in calls to gettext(). However, the statement will still be incorrect when translated, since the order of the arguments cannot be changed at runtime. To get around this, the POSIX (but not ISO C) version of the printf() family allows you to provide a positional specifier within a format specifier. This takes the form of a decimal number followed by a $ character immediately after the initial % character. For example:

    printf("%2$s, %1$s\n", "world", "hello");
The positional specifier indicates which argument in the argument list to use; counts begin at 1 and don’t include the format string itself. This example prints the famous ‘hello, world’ message in the correct order. GLIBC and Solaris implement this capability. As it’s part of POSIX, if your Unix vendor’s printf() doesn’t have it, it should be appearing soon. Any of the regular printf() flags, a field width, and a precision may follow the positional specifier. The rules for using positional specifiers are these:

• The positional specifier form may not be mixed with the nonpositional form. In other words, either every format specifier includes a positional specifier or none of them do. Of course, %% can always be used.
• If the Nth argument is used in the format string, all the arguments up to N must also be used by the string. Thus, the following is invalid:

      printf("%3$s %1$s\n", "hello", "cruel", "world");

• A particular argument may be referenced with a positional specifier multiple times.

Nonpositional format specifications always move through the argument list sequentially. This facility isn’t intended for direct use by application programmers, but rather by translators. For example, a French translation for the previous format string, "The %s %s looks at you enquiringly.\n", might be:

    "Le %2$s %1$s te regarde d'un aire interrogateur.\n"
(Even this translation isn’t perfect: the article “Le” is gender specific. Preparing a program for translation is a hard job!)
13.3.6 Testing Translations in a Private Directory

The collection of messages in a program is referred to as the message catalog. This term also applies to each translation of the messages into a different language. When a program is installed, each translation is also installed in a standard location, where gettext() can find the right one at runtime. It can be useful to place translations in a directory other than the standard one, particularly for program testing. Especially on larger systems, a regular developer probably does not have the permissions necessary to install files in system directories. The bindtextdomain() function gives gettext() an alternative place to look for translations:

    #include <libintl.h>                                          GLIBC

    char *bindtextdomain(const char *domainname, const char *dirname);
Useful directories include ‘.’ for the current directory and /tmp. It might also be handy to get the directory from an environment variable, like so:

    char *td_dir;

    setlocale(LC_ALL, "");
    textdomain("killerapp");
    if ((td_dir = getenv("KILLERAPP_TD_DIR")) != NULL)
        bindtextdomain("killerapp", td_dir);
bindtextdomain() should be called before any calls to the gettext() family of functions. We see an example of how to use it in Section 13.3.8, “Creating Translations,” page 517.
13.3.7 Preparing Internationalized Programs

So far, we’ve looked at all the components that go into an internationalized program. This section summarizes the process.

1. Adopt the gettext.h header file into your application, and add definitions for the _() and N_() macros to a header file that is included by all your C source files. Don’t forget to define the ENABLE_NLS symbolic constant.

2. Call setlocale() as appropriate. It is easiest to call ‘setlocale(LC_ALL, "")’, but occasionally an application may need to be more picky about which locale categories it enables.

3. Pick a text domain for the application, and set it with textdomain().

4. If testing, bind the text domain to a particular directory with bindtextdomain().

5. Use strfmon(), strftime(), and the ' flag for printf() as appropriate. If other locale information is needed, use nl_langinfo(), particularly in conjunction with strftime().

6. Mark all strings that should be translated with calls to _() or N_(), as appropriate. A few should not be so marked, though. For example, if you use getopt_long() (see Section 2.1.2, “GNU Long Options,” page 27), you probably don’t want the long option names to be marked for translation. Also, simple format strings like "%d %d\n" don’t need to be translated, nor do debugging messages.

7. When appropriate, use ngettext() (or its variants) for dealing with values that can be either 1 or greater than 1.
8. Make life easier for your translators by using multiple strings representing complete sentences instead of doing word substitutions with %s and ?:. For example:

       if (an error occurred) {   /* RIGHT */
           /* Use multiple strings to make translation easier. */
           if (input_type == INPUT_FILE)
               fprintf(stderr, _("%s: cannot read file: %s\n"),
                       argv[0], strerror(errno));
           else
               fprintf(stderr, _("%s: cannot read pipe: %s\n"),
                       argv[0], strerror(errno));
       }

   This is better than

       if (an error occurred) {   /* WRONG */
           fprintf(stderr, _("%s: cannot read %s: %s\n"), argv[0],
                   input_type == INPUT_FILE ? _("file") : _("pipe"),
                   strerror(errno));
       }
As just shown, it’s a good idea to include a comment stating that there are multiple messages on purpose—to make it easier to translate the messages.
13.3.8 Creating Translations

Once your program has been internationalized, it’s necessary to prepare translations. This is done with several shell-level tools. We start with an internationalized version of ch06-echodate.c, from Section 6.1.4, “Converting a Broken-Down Time to a time_t,” page 176:

    /* ch13-echodate.c --- demonstrate translations */

    #include <stdio.h>
    #include <time.h>
    #include <locale.h>

    #define ENABLE_NLS 1
    #include "gettext.h"

    #define _(msgid) gettext(msgid)
    #define N_(msgid) msgid
    int main(void)
    {
        struct tm tm;
        time_t then;

        setlocale(LC_ALL, "");
        bindtextdomain("echodate", ".");
        textdomain("echodate");

        printf("%s", _("Enter a Date/time as YYYY/MM/DD HH:MM:SS : "));
        scanf("%d/%d/%d %d:%d:%d",
              & tm.tm_year, & tm.tm_mon, & tm.tm_mday,
              & tm.tm_hour, & tm.tm_min, & tm.tm_sec);

        /* Error checking on values omitted for brevity. */
        tm.tm_year -= 1900;
        tm.tm_mon -= 1;
        tm.tm_isdst = -1;        /* Don't know about DST */

        then = mktime(& tm);
        printf(_("Got: %s"), ctime(& then));
        exit(0);
    }
We have purposely used "gettext.h" and not <libintl.h>. If our application ships with a private copy of the gettext library, then "gettext.h" will find it, avoiding the system’s copy. On the other hand, if there is only a system copy, it will be found if there is no local copy. The situation is admittedly complicated by the fact that Solaris systems also have a gettext library which is not as featureful as the GNU version. Moving on to creating translations, the first step is to extract the translatable strings. This is done with the xgettext program:
The --keyword options tell xgettext to look for the _() and N_() macros. It already knows to extract strings from gettext() and its variants, as well as from gettext_noop(). The output from xgettext is called a portable object file. The default filename is messages.po, corresponding to the default text domain of "messages". The --default-domain option indicates the text domain, for use in naming the output file. In this case, the file is named echodate.po. Here are its contents:
    # SOME DESCRIPTIVE TITLE.                       Boilerplate, to be edited
    # Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
    # This file is distributed under the same license as the PACKAGE package.
    # FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
    #
    #, fuzzy
    msgid ""                                        Detailed information
    msgstr ""                                       Each translator completes
    "Project-Id-Version: PACKAGE VERSION\n"
    "Report-Msgid-Bugs-To: \n"
    "POT-Creation-Date: 2003-07-14 18:46-0700\n"
    "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
    "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
    "Language-Team: LANGUAGE <LL@li.org>\n"
    "MIME-Version: 1.0\n"
    "Content-Type: text/plain; charset=CHARSET\n"
    "Content-Transfer-Encoding: 8bit\n"

    #: ch13-echodate.c:19                           Message location
    msgid "Enter a Date/time as YYYY/MM/DD HH:MM:SS : "   Original message
    msgstr ""                                       Translation goes here

    #: ch13-echodate.c:32                           Same for each message
    #, c-format
    msgid "Got: %s"
    msgstr ""
This original file is reused for each translation. It is thus a template for translations, and by convention it should be renamed to reflect this fact, with a .pot (portable object template) suffix:

    $ mv echodate.po echodate.pot
Given that we aren’t fluent in many languages, we have chosen to translate the messages into pig Latin. Thus, the next step is to produce a translation. We do this by copying the template file and adding translations to the new copy:

    $ cp echodate.pot piglat.po
    $ vi piglat.po                      Add translations, use your favorite editor
The filename convention is language.po, where language is the two- or three-character international standard abbreviation for the language. Occasionally the form language_country.po is used: for example, pt_BR.po for Portuguese in Brazil. As pig Latin isn’t a real language, we’ve called the file piglat.po. Here are the contents, after the translations have been added:
    # echodate translations into pig Latin
    # Copyright (C) 2004 Prentice-Hall
    # This file is distributed under the same license as the echodate package.
    # Arnold Robbins, 2004
    #
    #, fuzzy
    msgid ""
    msgstr ""
    "Project-Id-Version: echodate 1.0\n"
    "Report-Msgid-Bugs-To: [email protected]\n"
    "POT-Creation-Date: 2003-07-14 18:46-0700\n"
    "PO-Revision-Date: 2003-07-14 19:00+8\n"
    "Last-Translator: Arnold Robbins\n"
    "Language-Team: Pig Latin\n"
    "MIME-Version: 1.0\n"
    "Content-Type: text/plain; charset=ASCII\n"
    "Content-Transfer-Encoding: 8bit\n"

    #: ch13-echodate.c:19
    msgid "Enter a Date/time as YYYY/MM/DD HH:MM:SS : "
    msgstr "Enteray A Ateday/imetay asay YYYY/MM/DD HH:MM:SS : "

    #: ch13-echodate.c:32
    #, c-format
    msgid "Got: %s"
    msgstr "Otgay: %s"
While it would be possible to do a linear search directly in the portable object file, such a search would be slow. For example, gawk has approximately 350 separate messages, and the GNU Coreutils have over 670. Linear searching a file with hundreds of messages would be noticeably slow. Therefore, GNU gettext uses a binary format for fast message lookup. msgfmt does the compilation, producing a message object file:

    $ msgfmt piglat.po -o piglat.mo
As program maintenance is done, the strings used by a program change: new strings are added, others are deleted or changed. At the very least, a string’s location in the source file may move around. Thus, translation .po files will likely get out of date. The msgmerge program merges an old translation file with a new .pot file. The result can then be updated. This example does a merge and then recompiles:

    $ msgmerge piglat.po echodate.pot -o piglat.new.po     Merge files
    $ mv piglat.new.po piglat.po                           Rename the result
    $ vi piglat.po                                         Bring translations up to date
    $ msgfmt piglat.po -o piglat.mo                        Recreate .mo file
Compiled .mo files are placed in the file base/locale/category/textdomain.mo. On GNU/Linux systems, base is /usr/share/locale. locale is the language, such as ‘es’, ‘fr’, and so on. category is a locale category; for messages, it is LC_MESSAGES.
textdomain is the text domain of the program: in our case, echodate. As a real example, the GNU Coreutils Spanish translation is in /usr/share/locale/es/LC_MESSAGES/coreutils.mo.
The bindtextdomain() function changes the base part of the location. In ch13-echodate.c, we change it to ‘.’. Thus, it’s necessary to make the appropriate directories and place the pig Latin translation there:

    $ mkdir -p en_US/LC_MESSAGES                    Have to use a real locale
    $ cp piglat.mo en_US/LC_MESSAGES/echodate.mo    Put the file in the right place

A real locale must be used;10 thus, we “pretend” by using "en_US". With the translation in place, we set LC_ALL appropriately, cross our fingers, and run the program:

    $ LC_ALL=en_US ch13-echodate                    Run the program
    Enteray A Ateday/imetay asay YYYY/MM/DD HH:MM:SS : 2003/07/14 21:19:26
    Otgay: Mon Jul 14 21:19:26 2003
The latest version of GNU gettext can be found in the GNU gettext distribution directory.11 This section has necessarily only skimmed the surface of the localization process. GNU gettext provides many tools for working with translations, and in particular for making it easy to keep translations up to date as program source code evolves. The manual process for updating translations is workable but tedious. This task is easily automated with make; in particular GNU gettext integrates well with Autoconf and Automake to provide this functionality, removing considerable development burden from the programmer. We recommend reading the GNU gettext documentation to learn more about both of these issues in particular and about GNU gettext in general.
13.4 Can You Spell That for Me, Please? In the very early days of computing, different systems assigned different correspondences between numeric values and glyphs—symbols such as letters, digits, and punctuation used for communication with humans. Eventually, two widely used standards emerged: the EBCDIC encoding used on IBM and workalike mainframes, and ASCII, 10 We spent a frustrating 30 or 45 minutes attempting to use a piglat/LC_MESSAGES directory and setting ‘LC_ALL=piglat’, all to no effect, until we figured this out. 11 ftp://ftp.gnu.org/gnu/gettext
522
Chapter 13 • Internationalization and Localization
used on everything else. Today, except on mainframes, ASCII is the basis for all other character sets currently in use. The original seven-bit ASCII character set suffices for American English and most punctuation and special characters (such as $, but there is no character for the “cent” symbol). However, there are many languages and many countries with different character set needs. ASCII doesn’t handle the accented versions of the roman characters used in Europe, and many Asian languages have thousands of characters. New technologies have evolved to solve these deficiencies. The i18n literature abounds with references to three fundamental terms. Once we define them and their relationship to each other, we can present a general description of the corresponding C APIs. Character set A definition of the meaning assigned to different integer values; for example, that A is 65. Any character set that uses more than eight bits per character is termed a multibyte character set. Character set encoding ASCII uses a single byte to represent characters. Thus, the integer value is stored as itself, directly in disk files. More recent character sets, most notably different versions of Unicode,12 use 16-bit or even 32-bit integer values for representing characters. For most of the defined characters, one, two, or even three of the higher bytes in the integer are zero, making direct storage of their values in disk files expensive. The encoding describes a mechanism for converting 16- or 32-bit values into one to six bytes for storage on disk, such that overall there is a significant space savings. Language The rules for a given language dictate character set usage. In particular, the rules affect the ordering of characters. For example, in French, e, é, and è should all come between d and f, no matter what numerical values are assigned to those characters. Different languages can (and do) assign different orderings to the same glyphs.
12 http://www.unicode.org
Various technologies have evolved over time for supporting multibyte character sets. Computing practice is slowly converging on Unicode and its encoding, but Standard C and POSIX support both past and present techniques. This section provides a conceptual overview of the various facilities. We have not had to use them ourselves, so we prefer to merely introduce them and provide pointers to more information.
13.4.1 Wide Characters

We start with the concept of a wide character. A wide character is an integer type that can hold any value of the particular multibyte character set being used. Wide characters are represented in C with the type wchar_t. C99 provides a corresponding wint_t type, which can hold any value that a wchar_t can hold, as well as the special value WEOF, which is analogous to regular EOF from <stdio.h>. The various types are defined in the <wchar.h> header file. A number of functions similar to those of <ctype.h> are defined by the <wctype.h> header file, such as iswalnum(), and many more. Wide characters may be 16 to 32 bits in size, depending on the implementation. As mentioned, they’re intended for manipulating data in memory and are not usually stored directly in files. For wide characters, the C standard provides a large number of functions and macros that correspond to the traditional functions that work on char data. For example, wprintf(), iswlower(), and so on. These are documented in the GNU/Linux manpages and in books on Standard C.
13.4.2 Multibyte Character Encodings

Strings of wide characters are stored on disk by being converted to a multibyte character set encoding in memory, and the converted data is then written to a disk file. Similarly, such strings are read in from disk through low-level block I/O, and converted in memory from the encoded version to the wide-character version. Many defined encodings represent multibyte characters by using shift states. In other words, given an input byte stream, byte values represent themselves until a special control value is encountered. At that point, the interpretation changes according to the current shift state. Thus, the same eight-bit value can have two meanings: one for the
524
Chapter 13 • Internationalization and Localization
normal, unshifted state, and another for the shifted state. Correctly encoded strings are supposed to start and end in the same shift state. A significant advantage to Unicode is that its encodings are self-correcting: the encodings don’t use shift states, so a loss of data in the middle does not corrupt the subsequent encoded data. The initial versions of the multibyte-to-wide-character and wide-character-to-multibyte functions maintained a private copy of the state of the translation (for example, the shift state, and anything else that might be necessary). This design limits the functions’ use to one kind of translation throughout the life of the program. Examples are mblen() (multibyte-character length), mbtowc() (multibyte to wide character), wctomb() (wide character to multibyte), mbstowcs() (multibyte string to wide-character string), and wcstombs() (wide-character string to multibyte string). The newer versions of these routines are termed restartable. This means that the user-level code maintains the state of the translation in a separate object, of type mbstate_t. The corresponding examples are mbrlen(), mbrtowc(), wcrtomb(), mbsrtowcs(), and wcsrtombs(). (Note the r, for “restartable,” in their names.)
13.4.3 Languages

Language issues are controlled by the locale. We’ve already seen setlocale() earlier in the chapter. POSIX provides an elaborate mechanism for defining the rules by which a locale works; see the GNU/Linux locale(5) manpage for some of the details, and the POSIX standard itself for the full story. The truth is, you really don’t want to know the details. Nor should you, as an application developer, need to worry about them; it is up to the library implementors to make things work. All you need to do is understand the concepts and make your code use the appropriate functions, such as strcoll() (see Section 13.2.3, “String Collation: strcoll() and strxfrm(),” page 490). Current GLIBC systems provide excellent locale support, including a multibyte-aware suite of regular expression matching routines. For example, the POSIX extended regular expression [[:alpha:]][[:alnum:]]+ matches a letter followed by one or more letters or digits (an alphabetic character followed by one or more alphanumeric
ones). The definition of which characters match these classes depends on the locale. For example, this regular expression would match the two characters ‘eè’, whereas the traditional Unix, ASCII-oriented regular expression [a-zA-Z][a-zA-Z0-9]+ most likely would not. The POSIX character classes are listed in Table 13.5.

TABLE 13.5 POSIX regular expression character classes
Class name    Matches
[:alnum:]     Alphanumeric characters.
[:alpha:]     Alphabetic characters.
[:blank:]     Space and TAB characters.
[:cntrl:]     Control characters.
[:digit:]     Numeric characters.
[:graph:]     Characters that are both printable and visible. (A space is printable but not visible, whereas a $ is both.)
[:lower:]     Lowercase alphabetic characters.
[:print:]     Printable characters (not control characters).
[:punct:]     Punctuation characters (not letters, digits, control characters, or space characters).
[:space:]     Space characters (such as space, TAB, newline, and so on).
[:upper:]     Uppercase alphabetic characters.
[:xdigit:]    Characters from the set abcdefABCDEF0123456789.
13.4.4 Conclusion

You may never have to deal with different character sets and encodings. On the other hand, the world is rapidly becoming a "global village," and software authors and vendors can't afford to be parochial. It pays, therefore, to be aware of internationalization issues and character set issues and the way in which they affect your system's behavior. Already, at least one vendor of GNU/Linux distributions sets the default locale to be en_US.UTF-8 for systems in the United States.
Chapter 13 • Internationalization and Localization
13.5 Suggested Reading

1. C, A Reference Manual, 5th edition, by Samuel P. Harbison III and Guy L. Steele, Jr., Prentice-Hall, Upper Saddle River, New Jersey, USA, 2002. ISBN: 0-13-089592-X.
   We have mentioned this book before. It provides a concise and comprehensible description of the evolution and use of the multibyte and wide-character facilities in the C standard library. This is particularly valuable on modern systems supporting C99 because the library was significantly enhanced for the 1999 C standard.

2. GNU gettext tools, by Ulrich Drepper, Jim Meyering, François Pinard, and Bruno Haible.
   This is the manual for GNU gettext. On a GNU/Linux system, you can see the local copy with 'info gettext'. Or download and print the latest version (from ftp://ftp.gnu.org/gnu/gettext/).
13.6 Summary

• Program internationalization and localization fall under the general heading of native language support. i18n, l10n, and NLS are popular acronyms. The central concept is the locale, which customizes the character set, date, time, and monetary and numeric information for the current language and country.

• Locale awareness must be enabled with setlocale(). Different locale categories provide access to the different kinds of locale information. Locale-unaware programs act as if they were in the "C" locale, which produces results typical of Unix systems before NLS: 7-bit ASCII, English names for months and days, and so on. The "POSIX" locale is equivalent to the "C" one.

• Locale-aware string comparisons are done with strcoll() or with the combination of strxfrm() and strcmp(). Library facilities provide access to locale information (localeconv() and nl_langinfo()) as well as locale-specific information formatting (strfmon(), strftime(), and printf()).

• The flip side of retrieving locale-related information is producing messages in the local language. The System V catgets() design, while standardized by POSIX,
is difficult to use and not recommended.13 Instead, GNU gettext implements and extends the original Solaris design.

• With gettext(), the original English message string acts as a key into a binary translation file from which to retrieve the string's translation. Each application specifies a unique text domain so that gettext() can find the correct translation file (known as a "message catalog"). The text domain is set with textdomain(). For testing, or as otherwise needed, the location for message catalogs can be changed with bindtextdomain().

• Along with gettext(), variants provide access to translations in different text domains or different locale categories. Additionally, the ngettext() function and its variants enable correct plural translations without overburdening the developer. The positional specifier within printf() format specifiers enables translation of format strings where arguments need to be printed in a different order from the one in which they're provided.

• In practice, GNU programs use the gettext.h header file and _() and N_() macros for marking translatable strings in their source files. This practice keeps program source code readable and maintainable while still providing the benefits of i18n and l10n.

• GNU gettext provides numerous tools for the creation and management of translation databases (portable object files) and their binary equivalents (message object files).

• Finally, it pays to be aware of character set and encoding issues. Software vendors can no longer afford to assume that their users are willing to work in only one language.
Exercises

1. Does your system support locales? If so, what is the default locale?

2. Look at the locale(1) manpage if you have it. How many locales are there if you count them with 'locale -a | wc -l'?
13 GNU/Linux supports it, but only for compatibility.
3. Experiment with ch13-strings.c, ch13-lconv.c, ch13-strfmon.c, ch13-quoteflag.c, and ch13-times.c in different locales. What is the most "unusual" locale you can find, and why?

4. Take one of your programs. Internationalize it to use GNU gettext. Try to find someone who speaks another language to translate the messages for you. Compile the translation, and test it by using bindtextdomain(). What was your translator's reaction upon seeing the translations in use?
14 Extended Interfaces
In this chapter

14.1 Allocating Aligned Memory: posix_memalign() and memalign()   page 530
14.2 Locking Files                                                 page 531
14.3 More Precise Times                                            page 543
14.4 Advanced Searching with Binary Trees                          page 551
14.5 Summary                                                       page 562
Exercises                                                          page 563
This chapter describes several extended APIs. The APIs here are similar in nature to those described earlier in the book or provide additional facilities. Some of them could not be discussed easily until after the prerequisite topics were covered.
The presentation order here parallels the order of the chapters in the first half of the book. The topics are not otherwise related to each other. We cover the following topics: dynamically allocating aligned memory; file locking; a number of calls that work with subsecond time values; and a more advanced suite of functions for storing and retrieving arbitrary data values. Unless stated otherwise, all the APIs in this chapter are included in the POSIX standard.
14.1 Allocating Aligned Memory: posix_memalign() and memalign()

For most tasks, the standard allocation routines, malloc(), realloc(), and so on, are fine. Occasionally, though, you may need memory that is aligned a certain way. In other words, the address of the first allocated byte is a multiple of some number. (For example, on some systems, memory copies are significantly faster into and out of word-aligned buffers.) Two functions offer this service:

#include <stdlib.h>                                                POSIX ADV

int posix_memalign(void **memptr, size_t alignment, size_t size);

#include <malloc.h>                                                Common

void *memalign(size_t boundary, size_t size);

posix_memalign() is a newer function; it's part of yet another optional extension, the "Advisory Information" (ADV) extension. The function works differently from most other Linux allocation APIs. It does not return -1 when there's a problem. Rather, the return value is 0 on success or an errno value on failure. The arguments are as follows:

void **memptr
    A pointer to a void * variable. The pointed-to variable will have the address of the allocated storage placed into it. The allocated storage can be released with free().

size_t alignment
    The required alignment. It must be a multiple of sizeof(void *) and a power of 2.
size_t size
    The number of bytes to allocate.

memalign() is a nonstandard but widely available function that works similarly. The return value is NULL on failure or the requested storage on success, with boundary (a power of 2) indicating the alignment and size indicating the requested amount of memory.
Traditionally, storage allocated with memalign() could not be released with free(), since memalign() would allocate storage with malloc() and return a pointer to a suitably-aligned byte somewhere within that storage. The GLIBC version does not have this problem. Of the two, you should use posix_memalign() if you have it.
14.2 Locking Files

Modern Unix systems, including GNU/Linux, allow you to lock part or all of a file for reading and writing. Like many parts of the Unix API that developed after V7, there are multiple, conflicting ways to do file locking. This section covers the possibilities.
14.2.1 File Locking Concepts

Just as the lock in your front door prevents unwanted entry into your home, a lock on a file prevents access to data within the file. File locking was added to Unix after the development of V7 (from which all modern Unix systems are descended), and thus, for a while, multiple, conflicting file-locking mechanisms were available and in use on different Unix systems. Both BSD Unix and System V had their own incompatible locking calls. Eventually, POSIX formalized the System V way of doing file locks. Fortunately, the names of the calls were different between System V and BSD, so GNU/Linux, in an effort to please everyone, supports both kinds of locks. Table 14.1 summarizes the different kinds of locks. There are multiple aspects to locking, as follows:

Record locking
    A record lock is a lock on a portion of the file. Since Unix files are just byte streams, it would be more correct to use the term range lock, since the lock is on a range of bytes. Nevertheless, the term "record lock" is in common use.
TABLE 14.1 File locking functions

Source   Function   Record   Whole file   R/W   Advisory   Mandatory
BSD      flock()              ✓            ✓     ✓
POSIX    fcntl()    ✓         ✓            ✓     ✓          ✓
POSIX    lockf()    ✓         ✓            ✓     ✓          ✓
Whole file locking
    A whole file lock, as the name implies, locks the entire file, even if its size changes while the lock is held. The BSD interface provides only whole-file locking. To lock the whole file using the POSIX interface, specify a length of zero. This is treated specially to mean "the entire file."

Read locking
    A read lock prevents writing on the area being read. There may be multiple read locks on a file, and even on the same region of a file, without them interfering with each other, since data are only being accessed and not changed.

Write locking
    A write lock provides exclusive access to the area being written. If that area is blocked with a read lock, the attempt to acquire a write lock either blocks or fails, depending on the type of lock request. Once a write lock has been acquired, an attempt to acquire a read lock fails.

Advisory locking
    An advisory lock closely matches your front-door lock. It's been said that "locks keep honest people honest," meaning that if someone really wishes to break into your house, he will probably find a way to do so, despite the lock in your front door. So too with an advisory lock; it only works when everyone attempting to access the locked file first attempts to acquire the lock. However, it's possible for a program to completely ignore any advisory locks and do what it pleases with the file (as long as the file permissions allow it, of course).

Mandatory locking
    A mandatory lock is a stronger form of lock: When a mandatory lock is in place, no other process can access the locked file. Any process that attempts to ignore
the lock either blocks until the lock becomes available or will have its operation fail. (Under GNU/Linux, at least, this includes root!) Advisory locking is adequate for cooperating programs that share a private file, when no other application is expected to use the file. Mandatory locking is advisable in situations in which avoiding conflicting file use is critical, such as in commercial database systems. POSIX standardizes only advisory locking. Mandatory locking is available on GNU/Linux, as well as on a number of commercial Unix systems, but the details vary. We cover the GNU/Linux details later in this section.
14.2.2 POSIX Locking: fcntl() and lockf()

The fcntl() (file control) system call is used for file locking. (Other uses for fcntl() were described in Section 9.4.3, "Managing File Attributes: fcntl()," page 328.) It is declared as follows:

#include <unistd.h>                                                POSIX
#include <fcntl.h>

int fcntl(int fd, int cmd);                          /* Not relevant for file locking */
int fcntl(int fd, int cmd, long arg);                /* Not relevant for file locking */
int fcntl(int fd, int cmd, struct flock *lock);
The arguments are as follows:

fd      The file descriptor for the open file.

cmd     One of the symbolic constants defined in <fcntl.h>. These are described in more detail below.

lock    A pointer to a struct flock describing the desired lock.
14.2.2.1 Describing a Lock

Before looking at how to get a lock, let's examine how to describe a lock to the operating system. You do so with the struct flock structure, which describes the byte range to be locked and the kind of lock being requested. The POSIX standard states that a struct flock contains "at least" certain members. This allows implementations to provide additional structure members if so desired. From the fcntl(3) manpage, slightly edited:
struct flock {
    ...
    short l_type;      /* Type of lock: F_RDLCK, F_WRLCK, F_UNLCK */
    short l_whence;    /* How to interpret l_start: SEEK_SET, SEEK_CUR, SEEK_END */
    off_t l_start;     /* Starting offset for lock */
    off_t l_len;       /* Number of bytes to lock; 0 means from start to end-of-file */
    pid_t l_pid;       /* PID of process blocking our lock (F_GETLK only) */
    ...
};
The l_start field is the starting byte offset for the lock. l_len is the length of the byte range, that is, the total number of bytes to lock. l_whence specifies the point in the file that l_start is relative to; the values are the same as for the whence argument to lseek() (see Section 4.5, "Random Access: Moving Around within a File," page 102), hence the name for the field. The structure is thus self-contained: The l_start offset and l_whence value are not related to the current file offset for reading and writing. Some example code might look like this:

struct employee { /* whatever */ };             /* Describe an employee */
struct flock lock;                              /* Lock structure */
...
/* Lock sixth struct employee */
lock.l_whence = SEEK_SET;                       /* Absolute position */
lock.l_start = 5 * sizeof(struct employee);     /* Start of 6th structure */
lock.l_len = sizeof(struct employee);           /* Lock one record */
Using SEEK_CUR or SEEK_END, you can lock ranges relative to the current position in the file, or relative to the end of the file, respectively. For these two cases, l_start may be negative, as long as the absolute starting position is not less than zero. Thus, to lock the last record in a file:

/* Lock last struct employee */
lock.l_whence = SEEK_END;                       /* Relative to EOF */
lock.l_start = -1 * sizeof(struct employee);    /* Start of last structure */
lock.l_len = sizeof(struct employee);           /* Lock one record */
Setting l_len to 0 is a special case. It means lock the file from the starting position indicated by l_start and l_whence through the end of the file. This includes any positions past the end of the file, as well. (In other words, if the file grows while the lock is held, the lock is extended so that it continues to cover the entire file.) Thus, locking the entire file is a degenerate case of locking a single record:

lock.l_whence = SEEK_SET;                       /* Absolute position */
lock.l_start = 0;                               /* Start of file */
lock.l_len = 0;                                 /* Through end of file */
The fcntl(3) manpage has this note:

    POSIX 1003.1–2001 allows l_len to be negative. (And if it is, the interval described by the lock covers bytes l_start + l_len up to and including l_start – 1.) However, for current kernels the Linux system call returns EINVAL in this situation.

(We note that the manpage refers to the 2.4.x series of kernels; it may pay to check the current manpage if your system is newer.) Now that we know how to describe where in the file to lock, we can describe the type of the lock with l_type. The possible values are as follows:

F_RDLCK    A read lock. The file must have been opened for reading to apply a read lock.

F_WRLCK    A write lock. The file must have been opened for writing to apply a write lock.

F_UNLCK    Release a previously held lock.
Thus, the complete specification of a lock involves setting a total of four fields in the struct flock structure: three to specify the byte range and the fourth to describe the desired lock type. The F_UNLCK value for l_type releases locks. In general, it's easiest to release exactly the same locks that you acquired earlier, but it is possible to "split" a lock by releasing a range of bytes in the middle of a larger, previously locked range. For example:

struct employee { /* whatever */ };             /* Describe an employee */
struct flock lock;                              /* Lock structure */
...
/* Lock struct employees 6-8 */
lock.l_whence = SEEK_SET;                       /* Absolute position */
lock.l_start = 5 * sizeof(struct employee);     /* Start of 6th structure */
lock.l_len = sizeof(struct employee) * 3;       /* Lock three records */

… obtain lock (see next section) …

/* Release record 7: this splits the previous lock into two: */
lock.l_whence = SEEK_SET;                       /* Absolute position */
lock.l_start = 6 * sizeof(struct employee);     /* Start of 7th structure */
lock.l_len = sizeof(struct employee) * 1;       /* Unlock one record */

… release lock (see next section) …
14.2.2.2 Obtaining and Releasing Locks

Once the struct flock has been filled in, the next step is to request the lock. This step is done with an appropriate value for the cmd argument to fcntl():

F_GETLK     Inquire if it's possible to obtain a lock.

F_SETLK     Obtain or release a lock.

F_SETLKW    Obtain a lock, waiting until it's available.
The F_GETLK command is the "Mother may I?" command. It inquires whether the lock described by the struct flock is available. If it is, the lock is not placed; instead, the operating system changes the l_type field to F_UNLCK. The other fields are left unchanged. If the lock is not available, then the operating system fills in the various fields with information describing an already held lock that blocks the requested lock from being obtained. In this case, l_pid contains the PID of the process holding the described lock.1 There's not a lot to be done if the lock is being held, other than to wait awhile and try again to obtain the lock, or print an error message and give up.

The F_SETLK command attempts to acquire the specified lock. If fcntl() returns 0, then the lock has been successfully acquired. If it returns -1, then another process holds a conflicting lock. In this case, errno is set to either EAGAIN (try again later) or EACCES (access denied). Two values are possible, to cater to historical systems.

The F_SETLKW command also attempts to acquire the specified lock. It differs from F_SETLK in that it will wait until the lock becomes available.

Once you've chosen the appropriate value for the cmd argument, pass it as the second argument to fcntl(), with a pointer to a filled-in struct flock as the third argument:

struct flock lock;
int fd;

… open file, fill in struct flock …

if (fcntl(fd, F_SETLK, & lock) < 0) {
    /* Could not acquire lock, attempt to recover */
}
1 The GNU/Linux fcntl(3) manpage points out that this may not be enough information; the process could be residing on another machine! There are other issues with locks held across a network; in general, using locks on filesystems mounted from remote computers is not a good idea.
The lockf() function2 provides an alternative way to acquire a lock at the current file position:

#include <unistd.h>                                                XSI

int lockf(int fd, int cmd, off_t len);
The file descriptor, fd, must have been opened for writing. len specifies the number of bytes to lock: from the current position (call it pos) to pos + len bytes if len is positive, or from pos + len to pos - 1 if len is negative. The commands are as follows:

F_LOCK     Sets an exclusive lock on the range. The call blocks until the lock becomes available.

F_TLOCK    Tries the lock. This is like F_LOCK, but if the lock isn't available, F_TLOCK returns an error.

F_ULOCK    Unlocks the indicated section. This can cause lock splitting, as described earlier.

F_TEST     Sees if the lock is available. If it is, returns 0; the lock is not acquired. Otherwise, it returns -1 and sets errno to EACCES.
The return value is 0 on success and -1 on error, with errno set appropriately. Possible error returns include:

EAGAIN     The file is locked, for F_TLOCK or F_TEST.

EDEADLK    For F_TLOCK, this operation would cause a deadlock.3

ENOLCK     The operating system is unable to allocate a lock.
The combination of F_TLOCK and EDEADLK is useful: If you know that there can never be potential for deadlock, then use F_LOCK. Otherwise, it pays to be safe and use F_TLOCK. If the lock is available, you'll get it, but if it's not, you have a chance to recover instead of blocking, possibly forever, waiting for the lock. When you're done with a lock, you should release it. With fcntl(), take the original struct flock used to acquire the lock, and change the l_type field to F_UNLCK. Then use F_SETLK as the cmd argument:
2 On GNU/Linux, lockf() is implemented as a "wrapper" around fcntl().
3 A deadlock is a situation in which two processes would both block, each waiting on the other to release a resource.
lock.l_whence = ... ;                           /* As before */
lock.l_start = ... ;                            /* As before */
lock.l_len = ... ;                              /* As before */
lock.l_type = F_UNLCK;                          /* Unlock */
if (fcntl(fd, F_SETLK, & lock) < 0) {
    /* handle error */
}
/* Lock has been released */
Code using lockf() is a bit simpler. For brevity, we've omitted the error checking:

off_t curpos, len;

curpos = lseek(fd, (off_t) 0, SEEK_CUR);        /* Retrieve current position */
len = ... ;                                     /* Set correct number of bytes to lock */

lockf(fd, F_LOCK, len);                         /* Acquire lock */

… use locked area here …

lseek(fd, curpos, SEEK_SET);                    /* Return to position of lock */

lockf(fd, F_ULOCK, len);                        /* Unlock file */
If you don't explicitly release a lock, the operating system will do it for you in two cases. The first is when the process exits (either by main() returning or by the exit() function, which was covered in Section 9.1.5.1, "Defining Process Exit Status," page 300). The other case is when you call close() on the file descriptor: more on this in the next section.

14.2.2.3 Observing Locking Caveats

There are several caveats to be aware of when doing file locking:

• As described previously, advisory locking is just that. An uncooperative process can do anything it wants behind the back (so to speak) of processes that are doing locking.

• These calls should not be used in conjunction with the <stdio.h> library. This library does its own buffering, and while you can retrieve the underlying file descriptor with fileno(), the actual position in the file may not be where you think it is. In general, the Standard I/O library doesn't understand file locks.

• Bear in mind that locks are not inherited by child processes after a fork but that they do remain in place after an exec.
• A close() of any file descriptor open on the file removes all of the process’s locks on a file, even if other file descriptors remain open on it. That close() works this way is unfortunate, but because this is how fcntl() locking was originally implemented, POSIX standardizes it. Making this behavior the standard avoids breaking existing Unix code.
14.2.3 BSD Locking: flock()

4.2 BSD Unix introduced its own file locking mechanism, flock().4 It is declared as follows:

#include <sys/file.h>                                              Common

int flock(int fd, int operation);
The file descriptor fd represents the open file. These are the operations:

LOCK_SH    Creates a shared lock. There can be multiple shared locks.

LOCK_EX    Creates an exclusive lock. There can be only one such lock.

LOCK_UN    Removes the previous lock.

LOCK_NB    When bitwise-OR'd with LOCK_SH or LOCK_EX, avoids blocking if the lock isn't available.
By default, the lock requests will block (not return) if a competing lock exists. Once the competing lock is removed and the requested lock is obtained, the call returns. (This implies that, by default, there is potential for deadlock.) To attempt to obtain a lock without blocking, perform a bitwise OR of LOCK_NB with one of the other values for operation. The salient points about flock() are as follows:

• flock() locking is also advisory locking; a program that does no locking can come in and blast, with no errors, a file locked with flock().

• The whole file is locked. There is no mechanism for locking or unlocking just a part of the file.
4 It is fortunate that flock() is a different name from lockf(), since the semantics are different. It is also terribly confusing. Keep your manual handy.
• How the file was opened has no effect on the type of lock that may be placed. (Compare this to fcntl(), whereby the file must have been opened for reading for a read lock, or opened for writing for a write lock.)

• Multiple file descriptors open on the same file share the lock. Any one of them can be used to remove the lock. Unlike fcntl(), when there is no explicit unlock, the lock is not removed until all open file descriptors for the file have been closed.

• Only one flock() lock can be held by a process on a file; calling flock() successively with two different lock types changes the lock to the new type.

• On GNU/Linux systems, flock() locks are completely independent of fcntl() locks. Many commercial Unix systems implement flock() as a "wrapper" on top of fcntl(), but the semantics are not the same.

We don't recommend using flock() in new programs, because the semantics are not as flexible and because the call is not standardized by POSIX. Support for it in GNU/Linux is primarily for compatibility with software written for older BSD Unix systems.

NOTE  The GNU/Linux flock(2) manpage warns that flock() locks do not work for remotely mounted files. fcntl() locks do, provided you have a recent enough version of Linux and an NFS server that supports locking.
14.2.4 Mandatory Locking

Most commercial Unix systems support mandatory file locking, in addition to advisory file locking. Mandatory locking works only with fcntl() locks. Mandatory locking for a file is controlled by the file's permission settings, in particular, by addition of the setgid bit to a file with the chmod command:

$ echo hello, world > myfile                        Create file
$ ls -l myfile                                      Show permissions
-rw-r--r--    1 arnold   devel   13 Apr  3 17:11 myfile
$ chmod g+s myfile                                  Add setgid bit
$ ls -l myfile                                      Show new permissions
-rw-r-Sr--    1 arnold   devel   13 Apr  3 17:11 myfile
The group execute bit should be left turned off. The S shows that the setgid bit is turned on but that execute permission isn’t; an s would be used if both were on.
The combination of setgid on and group execute off is generally meaningless. For this reason, it was chosen by the System V developers to mean "enforce mandatory locking." And indeed, adding this bit is enough to cause a commercial Unix system, such as Solaris, to enforce file locks. On GNU/Linux systems, the story is a little different. The setgid bit must be applied to a file for mandatory locking, but that alone is not enough. The filesystem containing the file must also be mounted with the mand option to the mount command. We have already covered filesystems, disk partitions, mounting, and the mount command, mostly in Section 8.1, "Mounting and Unmounting Filesystems," page 228. We can demonstrate mandatory locking with a small program and a test filesystem on a floppy disk. First, here's the program:

 1  /* ch14-lockall.c --- Demonstrate mandatory locking. */
 2
 3  #include <stdio.h>      /* for fprintf(), stderr, BUFSIZ */
 4  #include <errno.h>      /* declare errno */
 5  #include <fcntl.h>      /* for flags for open() */
 6  #include <string.h>     /* declare strerror() */
 7  #include <unistd.h>     /* for ssize_t */
 8  #include <stdlib.h>
 9  #include <sys/types.h>
10  #include <sys/stat.h>   /* for mode_t */
11
12  int main(int argc, char **argv)
13  {
14      int fd;
15      int i, j;
16      mode_t rw_mode;
17      static char message[] = "hello, world\n";
18      struct flock lock;
19
20      if (argc != 2) {
21          fprintf(stderr, "usage: %s file\n", argv[0]);
22          exit(1);
23      }
24
25      rw_mode = S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH;    /* 0644 */
26      fd = open(argv[1], O_RDWR|O_TRUNC|O_CREAT|O_EXCL, rw_mode);
27      if (fd < 0) {
28          fprintf(stderr, "%s: %s: cannot open for read/write: %s\n",
29              argv[0], argv[1], strerror(errno));
30          (void) close(fd);
31          return 1;
32      }
33
34      if (write(fd, message, strlen(message)) != strlen(message)) {
35          fprintf(stderr, "%s: %s: cannot write: %s\n",
36              argv[0], argv[1], strerror(errno));
37          (void) close(fd);
38          return 1;
39      }
40
41      rw_mode |= S_ISGID;     /* add mandatory lock bit */
42
43      if (fchmod(fd, rw_mode) < 0) {
44          fprintf(stderr, "%s: %s: cannot change mode to %o: %s\n",
45              argv[0], argv[1], rw_mode, strerror(errno));
46          (void) close(fd);
47          return 1;
48      }
49
50      /* lock the file */
51      memset(& lock, '\0', sizeof(lock));
52      lock.l_whence = SEEK_SET;
53      lock.l_start = 0;
54      lock.l_len = 0;         /* whole-file lock */
55      lock.l_type = F_WRLCK;  /* write lock */
56
57      if (fcntl(fd, F_SETLK, & lock) < 0) {
58          fprintf(stderr, "%s: %s: cannot lock the file: %s\n",
59              argv[0], argv[1], strerror(errno));
60          (void) close(fd);
61          return 1;
62      }
63
64      pause();
65
66      (void) close(fd);
67
68      return 0;
69  }
The program sets the permissions and creates the file named on the command line (lines 25 and 26). It then writes some data into the file (line 34). Line 41 adds the setgid bit to the permissions, and line 43 changes them. (The fchmod() system call was discussed in Section 5.5.2, “Changing Permissions: chmod() and fchmod(),” page 156.) Lines 51–55 set up the struct flock to lock the whole file, and then the lock is actually requested on line 57. Once we have the lock, the program goes to sleep, using the pause() system call (see Section 10.7, “Signals for Interprocess Communication,” page 379). When done, the program closes the file descriptor and returns. Here is an annotated transcript demonstrating the use of mandatory file locking:
$ fdformat /dev/fd0                                 Format floppy disk
Double-sided, 80 tracks, 18 sec/track. Total capacity 1440 kB.
Formatting ... done
Verifying ... done
$ /sbin/mke2fs /dev/fd0                             Make a Linux filesystem
… lots of output, omitted …
$ su                                                Become root, to use mount
Password:                                           Password does not echo
# mount -t ext2 -o mand /dev/fd0 /mnt/floppy        Mount floppy, with locking
# suspend                                           Suspend root shell

[1]+  Stopped                 su
$ ch14-lockall /mnt/floppy/x &                      Background program
[2] 23311                                           holds lock
$ ls -l /mnt/floppy/x                               Look at file
-rw-r-Sr--    1 arnold   devel   13 Apr  6 14:23 /mnt/floppy/x
$ echo something > /mnt/floppy/x                    Try to modify file
bash2: /mnt/floppy/x: Resource temporarily unavailable    Error returned
$ kill %2                                           Kill program holding lock
$                                                   Press ENTER
[2]-  Terminated              ch14-lockall /mnt/floppy/x  Program died
$ echo something > /mnt/floppy/x                    Retry modification, works
$ fg                                                Return to root shell
su
# umount /mnt/floppy                                Unmount floppy
# exit                                              Done with root shell
$
As long as ch14-lockall is running, it holds the lock. Since it’s a mandatory lock, the shell’s I/O redirection fails. Once ch14-lockall exits, the lock is released, and the I/O redirection succeeds. As mentioned earlier, under GNU/Linux, not even root can override a mandatory file lock. As an aside, floppy disks make excellent test beds for learning how to use tools that manipulate filesystems. If you do something that destroys the data on a floppy, it’s not likely to be catastrophic, whereas experimenting with live partitions on regular hard disks is much riskier.
14.3 More Precise Times

The time() system call and time_t type represent times in "seconds since the Epoch" format. A resolution of one second really isn't enough; today's machines are fast, and it's often useful to distinguish subsecond time intervals. Starting with 4.2 BSD, Berkeley Unix introduced a series of system calls that make it possible to retrieve and use subsecond times. These calls are available on all modern Unix systems, including GNU/Linux.
14.3.1 Microsecond Times: gettimeofday()

The first task is to retrieve the time of day:                     XSI
#include int gettimeofday(struct timeval *tv, void *tz);
POSIX definition, not GLIBC’s
gets the time of day.5 The return value is 0 on success, -1 for an error. The arguments are as follows: gettimeofday()
struct timeval *tv
This argument is a pointer to a struct timeval, described shortly, into which the system places the current time. void *tz
This argument is no longer used; thus, it’s of type void * and you should always pass NULL for it. (The manpage describes what it used to be and then proceeds to state that it’s obsolete. Read it if you’re interested in the details.) The time is represented by a struct timeval: struct timeval { long tv_sec; long tv_usec; };
/* seconds */ /* microseconds */
The tv_sec value represents seconds since the Epoch; tv_usec is the number of microseconds within the second. The GNU/Linux gettimeofday(2) manpage also documents the following macros:

#define timerisset(tvp)    ((tvp)->tv_sec || (tvp)->tv_usec)
#define timercmp(tvp, uvp, cmp) \
        ((tvp)->tv_sec cmp (uvp)->tv_sec || \
         (tvp)->tv_sec == (uvp)->tv_sec && \
         (tvp)->tv_usec cmp (uvp)->tv_usec)
#define timerclear(tvp)    ((tvp)->tv_sec = (tvp)->tv_usec = 0)
These macros work on struct timeval * values; that is, pointers to structures, and their use should be obvious both from their names and the code. The timercmp() macro is particularly interesting: The third argument is a comparison operator to indicate the kind of comparison. For example, consider the determination of whether one struct timeval is less than another:

struct timeval t1, t2;
...
if (timercmp(& t1, & t2, <))
        /* t1 is less than t2 */

The macro call expands to:

(& t1)->tv_sec < (& t2)->tv_sec || \
    (& t1)->tv_sec == (& t2)->tv_sec && \
    (& t1)->tv_usec < (& t2)->tv_usec

5. The gettimeofday(2) manpage documents a corresponding settimeofday() function, for use by the superuser (root) to set the time of day for the whole system.
This says “if t1.tv_sec is less than t2.tv_sec, OR if they are equal and t1.tv_usec is less than t2.tv_usec, then ….”
14.3.2 Microsecond File Times: utimes()

Section 5.5.3, "Changing Timestamps: utime()," page 157, describes the utime() system call for setting the access and modification times of a given file. Some filesystems store these times with microsecond (or greater) resolution. Such systems provide the utimes() system call (note the trailing s in the name) to set the access and modification times with microsecond values:

#include <sys/time.h>                                         XSI

int utimes(char *filename, struct timeval tvp[2]);
The tvp argument should point to an array of two struct timeval structures; the values are used for the access and modification times, respectively. If tvp is NULL, then the system uses the current time of day. POSIX marks this as a “legacy” function, meaning that it’s standardized only to support old code and should not be used for new applications. The primary reason seems to be that there is no defined interface for retrieving file access and modification times that includes the microseconds value; the struct stat contains only time_t values, not struct timeval values. However, as mentioned in Section 5.4.3, “Linux Only: Specifying Higher-Precision File Times,” page 143, Linux 2.6 (and later) does provide access to nanosecond resolution timestamps with the stat() call. Some other systems (such as Solaris) do
as well.6 Thus, utimes() is more useful than it first appears, and despite its official “legacy” status, there’s no reason not to use it in your programs.
14.3.3 Interval Timers: setitimer() and getitimer()

The alarm() function (see Section 10.8.1, "Alarm Clocks: sleep(), alarm(), and SIGALRM," page 382) arranges to send SIGALRM after the given number of seconds has passed. Its smallest resolution is one second. Here too, 4.2 BSD introduced a function and three different timers that accept subsecond times.

An interval timer is like a repeating alarm clock. You set the first time it should "go off" as well as how frequently after that the timer should repeat. Both of these values use struct timeval objects; that is, they (potentially) have microsecond resolution. The timer "goes off" by delivering a signal; thus, you have to install a signal handler for the timer, preferably before setting the timer itself. Three different timers exist, as described in Table 14.2.

TABLE 14.2 Interval timers

Timer             Signal       Function
ITIMER_REAL       SIGALRM      Runs in real time.
ITIMER_VIRTUAL    SIGVTALRM    Runs when a process is executing in user mode.
ITIMER_PROF       SIGPROF      Runs when a process is in either user or system mode.
The use of the first timer, ITIMER_REAL, is straightforward. The timer runs down in real time, sending SIGALRM after the given amount of time has passed. (Because SIGALRM is sent, you cannot mix calls to setitimer() with calls to alarm(), and mixing them with calls to sleep() is also dangerous; see Section 10.8.1, “Alarm Clocks: sleep(), alarm(), and SIGALRM,” page 382.) The second timer, ITIMER_VIRTUAL, is also fairly straightforward. It runs down when the process is running, but only in user-level (application) code. If a process is
6. Unfortunately, there seems to be no current standard for the names of the members in the struct stat, making it an unportable operation.
blocked doing I/O, such as to a disk or, more importantly, to a terminal, the timer is suspended.

The third timer, ITIMER_PROF, is more specialized. It runs down whenever the process is running, even if the operating system is doing something on behalf of the process (such as I/O). According to the POSIX standard, it is "designed to be used by interpreters in statistically profiling the execution of interpreted programs." By setting both ITIMER_VIRTUAL and ITIMER_PROF to identical intervals and computing the difference between the times when the two timers go off, an interpreter can tell how much time it's spending in system calls on behalf of the executing interpreted program.7 (As stated, this is quite specialized.) The two system calls are:

#include <sys/time.h>                                         XSI

int getitimer(int which, struct itimerval *value);
int setitimer(int which, const struct itimerval *value,
              struct itimerval *ovalue);
The which argument is one of the symbolic constants listed earlier naming a timer. getitimer() fills in the struct itimerval pointed to by value with the given timer's current settings. setitimer() sets the given timer with the value in value. If ovalue is provided, the function fills it in with the timer's current value. Use an ovalue of NULL if you don't care about the current value. Both functions return 0 on success or -1 on error. A struct itimerval consists of two struct timeval members:

struct itimerval {
    struct timeval it_interval;    /* next value */
    struct timeval it_value;       /* current value */
};
Application programs should not expect timers to be exact to the microsecond. The getitimer(2) manpage provides this explanation:

    Timers will never expire before the requested time, instead expiring some short, constant time afterwards, dependent on the system timer resolution (currently 10ms). Upon expiration, a signal will be generated and the timer reset. If the timer expires while the process is active (always true for ITIMER_VIRT) the signal will be delivered immediately when generated. Otherwise the delivery will be offset by a small time dependent on the system loading.

7. Doing profiling correctly is nontrivial; if you're thinking about writing an interpreter, it pays to do your research first.

Of the three timers, ITIMER_REAL seems most useful. The following program, ch14-timers.c, shows how to read data from a terminal, but with a timeout so that the program won't hang forever waiting for input:

 1  /* ch14-timers.c --- demonstrate interval timers */
 2  #include <stdio.h>
 3  #include <stdlib.h>
 4  #include <unistd.h>
 5  #include <assert.h>
 6  #include <signal.h>
 7  #include <sys/time.h>
 8
 9  /* handler --- handle SIGALRM */
10  void handler(int signo)
11  {
12      static const char msg[] = "\n*** Timer expired, you lose ***\n";
13
14      assert(signo == SIGALRM);
15      write(2, msg, sizeof(msg) - 1);
16      exit(1);
17  }
18
19  /* main --- set up timer, read data with timeout */
20
21  int main(void)
22  {
23      struct itimerval tval;
24      char string[BUFSIZ];
25
26
27      timerclear(& tval.it_interval);   /* zero interval means no reset of timer */
28      timerclear(& tval.it_value);
29
30      tval.it_value.tv_sec = 10;        /* 10 second timeout */
31
32      (void) signal(SIGALRM, handler);
33
34      printf("You have ten seconds to enter\nyour name, rank, and serial number: ");
35
36      (void) setitimer(ITIMER_REAL, & tval, NULL);
37      if (fgets(string, sizeof string, stdin) != NULL) {
38          (void) setitimer(ITIMER_REAL, NULL, NULL);  /* turn off timer */
39          /* process rest of data, diagnostic print for illustration */
40          printf("I'm glad you are being cooperative.\n");
41      } else
42          printf("\nEOF, eh? We won't give up so easily!\n");
43
44      exit(0);
45  }
Lines 10–18 are the signal handler for SIGALRM; the assert() call makes sure that the signal handler was set up properly. The body of the handler prints a message and exits, but it could do anything appropriate for a larger-scale program.

In the main() program, lines 27–28 clear out the two struct timeval members of the struct itimerval structure, tval. Then line 30 sets the timeout to 10 seconds. Having tval.it_interval set to 0 means there is no repeated alarm; it only goes off once. Line 32 sets the signal handler, and line 34 prints the prompt. Line 36 sets the timer, and lines 37–42 print appropriate messages based on the user's action. A real program would do its work at this point. What's important to note is line 38, which cancels the timer because valid data was entered.

NOTE  There is a deliberate race condition between lines 37 and 38. The whole point is that if the user doesn't enter a line within the timer's expiration period, the signal will be delivered and the signal handler will print the "you lose" message.
Here are three successive runs of the program:

$ ch14-timers                                   First run, enter nothing
You have ten seconds to enter
your name, rank, and serial number:
*** Timer expired, you lose ***

$ ch14-timers                                   Second run, enter data
You have ten seconds to enter
your name, rank, and serial number: James Kirk, Starfleet Captain, 1234
I'm glad you are being cooperative.

$ ch14-timers                                   Third run, enter EOF (^D)
You have ten seconds to enter
your name, rank, and serial number: ^D
EOF, eh? We won't give up so easily!
POSIX leaves it undefined as to how the interval timers interact with the sleep() function, if at all. GLIBC does not use alarm() to implement sleep(), so on GNU/Linux systems, sleep() does not interact with the interval timer. However, for portable programs, you cannot make this assumption.
14.3.4 More Exact Pauses: nanosleep()

The sleep() function (see Section 10.8.1, "Alarm Clocks: sleep(), alarm(), and SIGALRM," page 382) lets a program sleep for a given number of seconds. But as we saw, it only accepts an integral number of seconds, making it impossible to delay for a short period, and it also can potentially interact with SIGALRM handlers. The nanosleep() function makes up for these deficiencies:

#include <time.h>                                             POSIX TMR

int nanosleep(const struct timespec *req, struct timespec *rem);
This function is part of the optional "Timers" (TMR) extension to POSIX. The two arguments are the requested sleep time and the amount of time remaining should the sleep return early (if rem is not NULL). Both of these are struct timespec values:

struct timespec {
    time_t tv_sec;    /* seconds */
    long tv_nsec;     /* nanoseconds */
};

The tv_nsec value must be in the range 0–999,999,999. As with sleep(), the amount of time slept can be more than the requested amount of time, depending on when and how the kernel schedules processes for execution. Unlike sleep(), nanosleep() has no interactions with any signals, making it generally safer and easier to use.

The return value is 0 if the process slept for the full time. Otherwise, it is -1, with errno indicating the error. In particular, if errno is EINTR, then nanosleep() was interrupted by a signal. In this case, if rem is not NULL, the struct timespec it points to is filled in with the remaining sleep time. This facilitates calling nanosleep() again to continue napping.
Although it looks a little strange, it’s perfectly OK to use the same structure for both parameters:
struct timespec sleeptime = /* whatever */ ;
int ret;

ret = nanosleep(& sleeptime, & sleeptime);
The struct timeval and struct timespec are similar to each other, differing only in the units of the second component. The GLIBC <sys/time.h> header file defines two useful macros for converting between them:

#include <sys/time.h>                                         GLIBC

void TIMEVAL_TO_TIMESPEC(struct timeval *tv, struct timespec *ts);
void TIMESPEC_TO_TIMEVAL(struct timespec *ts, struct timeval *tv);

Here they are:

# define TIMEVAL_TO_TIMESPEC(tv, ts) {          \
        (ts)->tv_sec = (tv)->tv_sec;            \
        (ts)->tv_nsec = (tv)->tv_usec * 1000;   \
}
# define TIMESPEC_TO_TIMEVAL(tv, ts) {          \
        (tv)->tv_sec = (ts)->tv_sec;            \
        (tv)->tv_usec = (ts)->tv_nsec / 1000;   \
}
#endif
NOTE  It is indeed confusing that some system calls use microsecond resolution and others use nanosecond resolution. The reason is historical: The microsecond calls were developed on systems whose hardware clocks did not have any higher resolution, whereas the nanosecond calls were developed more recently, for systems with much higher resolution clocks. C'est la vie. About all you can do is to keep your manual handy.
14.4 Advanced Searching with Binary Trees

In Section 6.2, "Sorting and Searching Functions," page 181, we presented functions for searching and sorting arrays. In this section, we cover a more advanced facility.
14.4.1 Introduction to Binary Trees

Arrays are about the simplest kind of structured data. They are easy to understand and use. They have a disadvantage, though, which is that their size is fixed at compile time. Thus, if you have more data than will fit in the array, you're out of luck. If you have considerably less data than the size of your array, you're wasting memory. (Although
modern systems have large memories, consider the constraints of programmers writing software for embedded systems, such as microwave ovens or cell phones. On the other end of the spectrum, consider the problems of programmers dealing with very large amounts of inputs, such as weather simulations.)

The computer science field has invented numerous dynamic data structures, structures that grow and shrink in size on demand, that are more flexible than simple arrays, even arrays created and resized dynamically with malloc() and realloc(). Arrays also require re-sorting should new elements be added or removed.

One such structure is the binary search tree, which we'll just call a "binary tree" for short. A binary tree maintains items in sorted order, inserting them in the proper place in the tree as they come in. Lookup in a binary tree is also fast, similar in time to binary search on an array. Unlike arrays, binary trees do not have to be re-sorted from scratch every time you add an item.

Binary trees have one disadvantage. In the case in which the input data is already sorted, the lookup time of binary trees reduces to that of linear searching. The technicalities of this have to do with how binary trees are managed internally, described shortly.

Some more formal data-structure terminology is now unavoidable. Figure 14.1 shows a binary tree. In computer science, trees are drawn starting at the top and growing downwards. The further down the tree you go, the higher depth you have. Each object within the tree is termed a node. At the top of the tree is the root node, with depth 0. At the bottom are the leaf nodes, with varying depth. In between the root and the leaves are zero or more internal nodes. Leaf nodes are distinguished by the fact that they have no subtrees hanging off them, whereas internal nodes have at least one subtree. Nodes with subtrees are sometimes referred to as parent nodes, with the subnodes being called children.
Plain binary trees are distinguished by the fact that nodes have no more than two children. (Trees with more than two children are useful but aren't relevant here.) The children are referred to as the left and right children, respectively.

Binary search trees are further distinguished by the fact that the values stored in a left subchild are always less than the value stored in the node itself, and the values stored in the right subchild are always greater than the value in the node itself. This implies that there are no duplicate values within the tree. This fact also explains why trees don't
Depth 0:                5                 Root node
                      /   \
Depth 1:            3       9
                   / \     / \            Internal and leaf nodes
Depth 2:          2   4   7   10
                 /       / \
Depth 3:        1       6   8             Leaf nodes

FIGURE 14.1 A binary tree
handle presorted data well: Depending on the sort order, each new data item ends up stored either to the left or to the right of the one before it, forming a simple linear list.

The operations on a binary tree are as follows:

Insertion
    Adding a new item to the tree.

Lookup
    Finding an item in the tree.

Removal
    Removing an item from the tree.

Traversal
    Doing something with every item that has been stored in the tree. Tree traversals are also referred to as tree walks.

There are multiple ways to "visit" the items stored in a tree. The functions discussed here only implement one such way. We have more to say about this, later.
14.4.2 Tree Management Functions

The operations just described correspond to the following functions:

#include <search.h>                                           XSI

void *tsearch(const void *key, void **rootp,
              int (*compare)(const void *, const void *));
void *tfind(const void *key, const void **rootp,
            int (*compare)(const void *, const void *));
void *tdelete(const void *key, void **rootp,
              int (*compare)(const void *, const void *));

typedef enum { preorder, postorder, endorder, leaf } VISIT;

void twalk(const void *root,
           void (*action)(const void *nodep, const VISIT which,
                          const int depth));

void tdestroy(void *root, void (*free_node)(void *nodep));    GLIBC

These functions were first defined for System V and are now formally standardized by POSIX. They follow the pattern of the others we saw in Section 6.2, "Sorting and Searching Functions," page 181: using void * pointers for pointing at arbitrary data types, and user-provided comparison functions to determine ordering. As for qsort() and bsearch(), the comparison function must return a negative/zero/positive value when the key is compared with a value in a tree node.
14.4.3 Tree Insertion: tsearch()

These routines allocate storage for the tree nodes. To use them with multiple trees, you have to give them a pointer to a void * variable which they fill in with the address of the root node. When creating a new tree, initialize this pointer to NULL:

void *root = NULL;                                   Root of new tree
void *val;                                           Pointer to returned data

extern int my_compare(const void *, const void *);   Comparison function
extern char key[], key2[];                           Values to insert in tree

val = tsearch(key, & root, my_compare);              Insert first item in tree

… fill key2 with a different value. DON'T modify root …

val = tsearch(key2, & root, my_compare);             Insert subsequent item in tree

As shown, the root variable should be set to NULL only the first time and then left alone after that. On each subsequent call, tsearch() uses it to manage the tree.
As shown, the root variable should be set to NULL only the first time and then left alone after that. On each subsequent call, tsearch() uses it to manage the tree. When the key being sought is found, both tsearch() and tfind() return pointers to the node containing it. They differ when the key is not found: tfind() returns
NULL,
and tsearch() inserts the new value into the tree and returns a pointer to it. The pointers returned by tsearch() and tfind() are to the internal tree nodes. They can be used as the value of root in subsequent calls in order to work on subtrees. As we will see shortly, the key value can be a pointer to an arbitrary structure; it's not restricted to a character string as the previous example might imply. These routines store only pointers to the data used for keys. Thus, it is up to you to manage the storage holding the data values, usually with malloc().

NOTE  Since the tree functions keep pointers, be extra careful not to use realloc() for values that have been used as keys! realloc() could move the data around, returning a new pointer, but the tree routines would still be maintaining dangling pointers into the old data.
14.4.4 Tree Lookup and Use of a Returned Pointer: tfind() and tsearch()

The tfind() and tsearch() functions search a binary tree for a given key. They take the same list of arguments: a key to search for; a pointer to the root of the tree, rootp; and compare, a pointer to a comparison function. Both functions return a pointer to the node that matches key.

Just how do you use the pointer returned by tfind() or tsearch()? What exactly does it point to, anyway? The answer is that it points to a node in the tree. This is an internal type; you can't see how it's defined. However, POSIX guarantees that this pointer can be cast to a pointer to a pointer to whatever you're using for a key. Here is some fragmentary code to demonstrate, and then we show how this works:

struct employee {                       From Chapter 6
    char lastname[30];
    char firstname[30];
    long emp_id;
    time_t start_date;
};

/* emp_name_id_compare --- compare by name, then by ID */

int emp_name_id_compare(const void *e1p, const void *e2p)
{
    … also from Chapter 6, reproduced in full later on …
}

struct employee key = { ... };
void *vp, *root;
struct employee *e;

… fill tree with data …

vp = tfind(& key, & root, emp_name_id_compare);     Retrieve stored data from tree
if (vp != NULL) {                                   /* it's there, use it */
    e = *((struct employee **) vp);
    /* use info in *e ... */
}
How can a pointer to a node double as a pointer to a pointer to the data? Well, consider how a binary tree's node would be implemented. Each node maintains at least a pointer to the user's data item and pointers to potential left and right subchildren. So, it has to look approximately like this:

struct binary_tree {
    void *user_data;                   Pointer to user's data
    struct binary_tree *left;          Left subchild or NULL
    struct binary_tree *right;         Right subchild or NULL
    … possibly other fields here …
} node;
C and C++ guarantee that fields within a struct are laid out in increasing address order. Thus, it's true that '& node.left < & node.right'. Furthermore, the address of the struct is also the address of its first field (in other words, ignoring type issues, '& node == & node.user_data').

Conceptually, then, here's what 'e = *((struct employee **) vp);' means:

1. vp is a void *, that is, a generic pointer. It is the address of the internal tree node, but it's also the address of the part of the node (most likely another void *) that points to the user's data.

2. '(struct employee **) vp' casts the address of the internal pointer to the correct type; it remains a pointer to a pointer, but now to a struct employee. Remember that casts from one pointer type to another don't change any values (bit patterns); they only change how the compiler treats the values for type considerations.

3. '*((struct employee **) vp)' indirects through the newly minted struct employee **, returning a usable struct employee * pointer.

4. 'e = *((struct employee **) vp)' stores this value in e for direct use later.

The concept is illustrated in Figure 14.2.
[FIGURE 14.2 Tree nodes and their pointers: vp, a void **, points at the user_data field of a struct binary_tree node (user_data, left, right); user_data in turn points at the struct employee (lastname, firstname, emp_id, start_date).]
You might consider defining a macro to simplify the use of the returned pointer:

#define tree_data(ptr, type) (*(type **) (ptr))
...
struct employee *e;
void *vp;

vp = tfind(& key, & root, emp_name_id_compare);
if (vp != NULL) {        /* it's there, use it */
    e = tree_data(vp, struct employee);
    /* use info in *e ... */
}
14.4.5 Tree Traversal: twalk()

The twalk() function is declared as follows in <search.h>:

typedef enum { preorder, postorder, endorder, leaf } VISIT;

void twalk(const void *root,
           void (*action)(const void *nodep, const VISIT which,
                          const int depth));
The first parameter is the root of the tree (not a pointer to the root). The second is a pointer to a callback function, which is called with three arguments: a pointer to the tree node being visited, an enumerated type indicating how the given node is being visited, and an integer indicating the depth of the current node (the root is at depth 0, as explained earlier).

The use here of a callback function is the same as for nftw() (see Section 8.4.3.2, "The nftw() Callback Function," page 263). There, the callback function is called for each object in the filesystem. Here, the callback function is called for each object stored in the tree. There are several ways to traverse, or "walk," a binary tree:

• Left child, node itself, right child.
• Node itself, left child, right child.

• Left child, right child, node itself.

The GLIBC twalk() function uses the second of these: the node first, then the left child, then the right child. Each time a node is encountered, the node is said to be visited.8 In the course of visiting a node's child, the function must visit the node itself. Thus the values of type VISIT indicate at what stage this node is being encountered:

preorder
    Before visiting any children.

postorder
    After visiting the first child but before visiting the second child.

endorder
    After visiting both children.

leaf
    This node is a leaf node, without children.

NOTE  The terminology used here does not exactly match that used in formal data-structures texts. There, the terms used are inorder, preorder, and postorder, referring respectively to the three ways listed earlier for traversing a tree. Thus, twalk() uses a preorder traversal, but uses the preorder, etc., symbolic constants to indicate at what stage a node is being visited. This can be confusing.
The following program, ch14-tsearch.c, demonstrates building and traversing a tree. It reuses the struct employee structure and emp_name_id_compare() function from Section 6.2, "Sorting and Searching Functions," page 181.

 1  /* ch14-tsearch.c --- demonstrate tree management */
 2  #include <stdio.h>
 3  #include <string.h>
 4  #include <search.h>
 5  #include <time.h>
 6
 7  struct employee {
 8      char lastname[30];
 9      char firstname[30];
10      long emp_id;
11      time_t start_date;
12  };
13
14  /* emp_name_id_compare --- compare by name, then by ID */
15
16  int emp_name_id_compare(const void *e1p, const void *e2p)
17  {
18      const struct employee *e1, *e2;
19      int last, first;
20
21      e1 = (const struct employee *) e1p;
22      e2 = (const struct employee *) e2p;
23
24      if ((last = strcmp(e1->lastname, e2->lastname)) != 0)
25          return last;
26
27      /* same last name, check first name */
28      if ((first = strcmp(e1->firstname, e2->firstname)) != 0)
29          return first;
30
31      /* same first name, check ID numbers */
32      if (e1->emp_id < e2->emp_id)
33          return -1;
34      else if (e1->emp_id == e2->emp_id)
35          return 0;
36      else
37          return 1;
38  }
39
40  /* print_emp --- print an employee structure during a tree walk */
41
42  void print_emp(const void *nodep, const VISIT which, const int depth)
43  {
44      struct employee *e = *((struct employee **) nodep);
45
46      switch (which) {
47      case leaf:
48      case postorder:
49          printf("Depth: %d. Employee:\n", depth);
50          printf("\t%s, %s\t%ld\t%s\n", e->lastname, e->firstname,
51                  e->emp_id, ctime(& e->start_date));
52          break;
53      default:
54          break;
55      }
56  }

8. Images come to mind of little binary data structures sitting down with each other over tea and cookies. Or at least that happens if you've been spending too much time in front of your computer…
Lines 7–12 define the struct employee, and lines 14–38 define emp_name_id_compare().

Lines 40–56 define print_emp(), the callback function that prints a struct employee along with the depth in the tree of the current node. Note the magic cast on line 44 to retrieve the pointer to the stored data.
58  /* main --- demonstrate maintaining data in binary tree */
59
60  int main(void)
61  {
62  #define NPRES 10
63      struct employee presidents[NPRES];
64      int i, npres;
65      char buf[BUFSIZ];
66      void *root = NULL;
67
68      /* Very simple code to read data: */
69      for (npres = 0; npres < NPRES && fgets(buf, BUFSIZ, stdin) != NULL; npres++) {
70          sscanf(buf, "%s %s %ld %ld\n",
71              presidents[npres].lastname,
72              presidents[npres].firstname,
73              & presidents[npres].emp_id,
74              & presidents[npres].start_date);
75      }
76
77      for (i = 0; i < npres; i++)
78          (void) tsearch(& presidents[i], & root, emp_name_id_compare);
79
80      twalk(root, print_emp);
81
82      return 0;
83  }
The goal of printing the tree is to print the contained elements in sorted order. Remember that twalk() visits intermediate nodes three times and that the left child is less than the node itself, while the right child is greater than the node. Thus, the switch statement prints the node's information only if which is leaf, for a leaf node, or postorder, indicating that the left child has been visited, but not yet the right child.

The data used is the list of presidents, also from Section 6.2, "Sorting and Searching Functions," page 181. To refresh your memory, the fields are last name, first name, employee number, and start time as a seconds-since-the-Epoch timestamp:

$ cat presdata.txt
Bush George 43 980013600
Clinton William 42 727552800
Bush George 41 601322400
Reagan Ronald 40 348861600
Carter James 39 222631200
The data are sorted based on last name, then first name, and then seniority. When run,9 the program produces this output:
9 This output is for the U.S. Eastern Time zone.
$ ch14-tsearch < presdata.txt
Depth: 1. Employee:
        Bush, George        41      Fri Jan 20 13:00:00 1989

Depth: 0. Employee:
        Bush, George        43      Sat Jan 20 13:00:00 2001

Depth: 2. Employee:
        Carter, James       39      Thu Jan 20 13:00:00 1977

Depth: 1. Employee:
        Clinton, William    42      Wed Jan 20 13:00:00 1993

Depth: 2. Employee:
        Reagan, Ronald      40      Tue Jan 20 13:00:00 1981
14.4.6 Tree Node Removal and Tree Deletion: tdelete() and tdestroy()

Finally, you can remove items from a tree and, on GLIBC systems, delete the entire tree itself:

void *tdelete(const void *key, void **rootp,
              int (*compare)(const void *, const void *));

/* GLIBC extension, not in POSIX: */
void tdestroy(void *root, void (*free_node)(void *nodep));

The arguments to tdelete() are the same as for tsearch(): the key, the address of the tree's root, and the comparison function. If the given item is found in the tree, it is removed and tdelete() returns a pointer to the parent of the given node. Otherwise, it returns NULL. This behavior has to be managed carefully in your code if you need the original item being deleted, for example, to free its storage:

struct employee *e, key;                              Variable declarations
void *vp, *root;

… fill in key for item to remove from tree …

vp = tfind(& key, & root, emp_name_id_compare);       Find item to remove
if (vp != NULL) {
    e = *((struct employee **) vp);                   Convert pointer
    free(e);                                          Free storage
}
(void) tdelete(& key, & root, emp_name_id_compare);   Now remove it from tree
Although not specified in the manpages or in the POSIX standard, under GNU/Linux, if you delete the item stored in the root node, the returned value is that of the new root node. For portable code, you should not necessarily rely on this behavior.
The tdestroy() function is a GLIBC extension. It allows you to destroy a whole tree. The first argument is the root of the tree. The second is a pointer to a function that releases the data pointed to by each node in the tree. If nothing needs to be done with these data (for example, they’re held in a regular array, as in our earlier example program), then this function should do nothing. Do not pass in a NULL pointer! Doing so results in a crash.
14.5 Summary

• Occasionally, it's necessary to allocate memory aligned at a certain boundary. posix_memalign() does this. Its return value is different from that of most of the functions covered in this book: Caveat emptor. memalign() also allocates aligned memory, but not all systems support releasing that memory with free().

• File locking with fcntl() provides record locks, down to the level of the ability to lock single bytes within a file. Read locks prevent writing of the locked area, and a write lock prevents other processes from reading and writing the locked area. Locking is advisory by default, and POSIX standardizes only advisory locking. Most modern Unix systems support mandatory locking, using the setgid permission bit on the file and possibly additional filesystem mount options.

• On GNU/Linux, the lockf() function acts as a wrapper around POSIX locking with fcntl(); the BSD flock() function's locks are (on GNU/Linux) completely independent of fcntl() locks. BSD flock() locks are whole-file locks only and do not work on remote filesystems. For these reasons, flock() locks are not recommended.

• gettimeofday() retrieves the time of day as a (seconds, microseconds) pair in a struct timeval. These values are used by utimes() to update a file's access and modification times. The getitimer() and setitimer() system calls use pairs of struct timeval values in a struct itimerval to create interval timers: alarm clocks that "go off" at a set time and continue to go off at a set interval thereafter. Three different timers provide control over the states in which the timer continues to run down.

• The nanosleep() function uses a struct timespec, which specifies time in seconds and nanoseconds, to pause a process for a given amount of time. It has the happy trait of not interacting at all with the signal mechanism.
• The tree API is an additional set of data storage and search functions that maintains data in binary trees, the effect of which is to keep data sorted. The tree API is very flexible, allowing use of multiple trees and arbitrary data.
14.6 Exercises

1. Write the lockf() function, using fcntl() to do the locking.

2. The directory /usr/src/linux/Documentation contains a number of files that describe different aspects of the operating system's behavior. Read the files locks.txt and mandatory.txt for more information about Linux's handling of file locks.

3. Run the ch14-lockall program on your system, without mandatory locking, and see if you can change the operand file.

4. If you have a non-Linux system that supports mandatory locking, try the ch14-lockall program on it.

5. Write a function named strftimes(), with the following API:

       size_t strftimes(char *buf, size_t size, const char *format,
                        const struct timeval *tp);

   It should behave like the standard strftime() function, except that it allows %q to mean "the current number of microseconds."

6. Using the strftimes() function you just wrote, write an enhanced version of date that accepts a format string beginning with a leading + and formats the current date and time. (See date(1).)

7. The handling of the timeout in ch14-timers.c is rather primitive. Rewrite the program to use setjmp() after printing the prompt and longjmp() from within the signal handler. Does this improve the structure or clarity of the program?

8. We noted that ch14-timers.c contains a deliberate race condition. Suppose the user enters a response within the right time period, but ch14-timers is suspended before the alarm can be canceled. What call can you make to cut down the size of the problem window?

9. Draw the tree as shown by the output of ch14-tsearch in Section 14.4.5, "Tree Traversal: twalk()," page 557.
10. Examine the file /usr/share/dict/words on a GNU/Linux system. (This is the spelling dictionary for spell; it might be in a different place on different systems.) The words exist in the file, one per line, in sorted order. First, use this awk program to create a new list, in random order:

        $ awk '{ list[$0]++ }
        > END { for (i in list) print i }' /usr/share/dict/words > /tmp/wlist
Next, write two programs. Each should read the new list; the first should store each word in a tree, and the second should store each word in an array. The second program should use qsort() to sort the array and bsearch() to search it. Retrieve the word 'gravy' from the tree or array. Time the two programs to see which one runs faster. You may need to put the word retrieval inside a loop that runs multiple times (say, 1,000), to get running times that are long enough to show a difference. Use the output of ps to see how much memory each program uses.

11. Rerun the two programs, using the original sorted dictionary file, and see how, if at all, the timing results change.
III Debugging and Final Project
Chapter 15 Debugging, page 567
Chapter 16 A Project That Ties Everything Together, page 641
15 Debugging
In this chapter

15.1 First Things First, page 568
15.2 Compilation for Debugging, page 569
15.3 GDB Basics, page 570
15.4 Programming for Debugging, page 577
15.5 Debugging Tools, page 605
15.6 Software Testing, page 632
15.7 Debugging Rules, page 633
15.8 Suggested Reading, page 637
15.9 Summary, page 638
Exercises, page 639
There are many practices, starting with program logic and data design, through code breakdown and organization, and finally implementation, that can help minimize errors and problems. We encourage you to study these; find good books on software design and software engineering, and put their advice into practice! Any program over a few hundred lines in size should be carefully thought out and designed, instead of just hacked on until it appears to work.
However, since programmers are human, programming errors are unavoidable. Debugging is the process of tracking down and removing errors in programs. Even well-designed, well-implemented programs occasionally don’t work; when something’s going wrong and you can’t figure out why, it’s a good idea to point a debugger at the code, and watch it fail. This chapter covers a range of topics, starting off with basic debugging advice and techniques (compiling for debugging and elementary use of GDB, the GNU debugger), moving on to a range of techniques for use during program development and debugging that make debugging easier, and then looking at a number of tools that help the debugging process. It then closes with a brief introduction to software testing, and a wonderful set of “debugging rules,” extracted from a book that we highly recommend. Most of our advice is based on our long-term experience as a volunteer for the GNU project, maintaining gawk (GNU awk). Most, if not all, the specific examples we present come from that program. Throughout the chapter, specific recommendations are marked Recommendation.
15.1 First Things First When a program misbehaves, you may be at a loss as to what to do first. Often, strange behavior is due to misusing memory—using uninitialized values, reading or writing outside the bounds of dynamic memory, and so on. Therefore, you may get faster results by trying out a memory-debugging tool before you crank up a debugger. The reason is that memory tools can point you directly at the failing line of code, whereas using a debugger is more like embarking on a search-and-destroy mission, in
which you first have to isolate the problem and then fix it. Once you’re sure that memory problems aren’t the issue, you can proceed to using a debugger. Because the debugger is a more general tool, we cover it first. We discuss a number of memory-debugging tools later in the chapter.
15.2 Compilation for Debugging

For a source code debugger to be used, the executable being debugged (the debuggee, if you will) must be compiled with the compiler's -g option. This option causes the compiler to emit extra debugging symbols into the object code; that is, extra information giving the names and types of variables, constants, functions, and so on. The debugger then uses this information to match source code locations with the code being executed and to retrieve or store variable values in the running program.

On many Unix systems, the -g compiler option is mutually exclusive with the -O option, which turns on optimizations. This is because optimizations can cause rearrangement of bits and pieces of the object code, such that there is no longer a direct relationship between what's being executed and a linear reading of the source code. By disabling optimizations, you make it much easier for the debugger to relate the object code to the source code, and in turn, single-stepping through a program's execution works in the obvious way. (Single-stepping is described shortly.)

GCC, the GNU Compiler Collection, does allow -g and -O together. However, this introduces exactly the problem we wish to avoid when debugging: that following the execution in a debugger becomes considerably more difficult. The advantage of allowing the two together is that you can leave the debugging symbols in an optimized, for-production-use executable. They occupy only disk space, not memory. Then, an installed executable can still be debugged in an emergency.

In our experience, if you need to use a debugger, it's better to recompile the application from scratch, with only the -g option. This makes tracing considerably easier; there's enough detail to keep track of just going through the program as it's written, without also having to worry about how the compiler rearranged the code. There is one caveat: Be sure the program still misbehaves.
Reproducibility is the key to debugging; if you can’t reproduce the problem, it’s much harder to track it down
and fix it. Rarely, compiling a program without -O can cause it to stop failing.1 Typically, the problem persists when compiled without -O, meaning there is indeed a logic bug of some kind, waiting to be discovered.
15.3 GDB Basics

A debugger is a program that allows you to control the execution of another program and examine and change the subordinate program's state (such as variable values). There are two kinds of debuggers: machine-level debuggers, which work on the level of machine instructions, and source-level debuggers, which work in terms of the program's source code. For example, in a machine-level debugger, to change a variable's value, you specify its address in memory. In a source-level debugger, you just use the variable's name.

Historically, V7 Unix had adb, which was a machine-level debugger. System III had sdb, which was a source-level debugger, and BSD Unix provided dbx, also a source-level debugger. (Both continued to provide adb.) dbx survives on some commercial Unix systems.
GDB, the GNU Debugger, is a source-level debugger. It has many more features, is more broadly portable, and is more usable than either sdb or dbx.2 Like its predecessors, GDB is a command-line debugger. It prints one line of source code at a time, prints a prompt, and reads one line of input containing a command to execute. There are graphical debuggers; these provide a larger view of the source code and usually provide the ability to manipulate the program both through a command-line window and through GUI components such as buttons and menus. The ddd debugger3 is one such; it is built on top of GDB, so if you learn GDB, you can make some use of ddd right away. (ddd has its own manual, which you should read if you’ll be using it
1. Compiler optimizations are a notorious scapegoat for logic bugs. In the past, finger-pointing at the compiler was more justified. In our experience, using modern systems and compilers, it is very unusual to find a case in which compiler optimization introduces bugs into working code.
2. We're speaking of the original BSD dbx. We have used GDB exclusively for well over a decade.
3. ddd comes with many GNU/Linux systems. The source code is available from the GNU Project's FTP site for ddd (ftp://ftp.gnu.org/gnu/ddd/).
heavily.) Another graphical debugger is Insight,4 which uses Tcl/Tk to provide a graphical interface on top of GDB. (You should use a graphical debugger if one is available to you and you like it. Since our intent is to provide an introduction to debuggers and debugging, we've chosen to go with a simple interface that can be presented in print.)

GDB understands C and C++, including support for name demangling, which means that you can use the regular C++ source code names for class member functions and overloaded functions. In particular, GDB understands C expression syntax, which is useful when you wish to look at the value of complicated expressions, such as '*ptr->x.a[1]->q'. It also understands Fortran 77, although you may have to append an underscore character to the Fortran variable and function names. GDB has partial support for Modula-2, and limited support for Pascal.

If you're running GNU/Linux or a BSD system (and you installed the development tools), then you should have a recent version of GDB already installed and ready to use. If not, you can download the GDB source code from the GNU Project's FTP site for GDB5 and build it yourself.

GDB comes with its own manual, which is over 300 pages long. You can generate the printable version of the manual in the GDB source code directory and print it yourself. You can also buy printed and bound copies from the Free Software Foundation; your purchase helps the FSF and contributes directly to the production of more free software. (See the FSF web site6 for ordering information.)

This section describes the basics of GDB; we recommend reading the manual to learn how to take full advantage of GDB's capabilities.
15.3.1 Running GDB

The basic usage is this:

    gdb [ options ] [ executable [ core-file-name ]]
Here, executable is the executable program to be debugged. If provided, core-file-name is the name of a core file created when a program was killed by the
4. http://sources.redhat.com/insight/
5. ftp://ftp.gnu.org/gnu/gdb/
6. http://www.gnu.org
operating system and dumped core. Under GNU/Linux, such files (by default) are named core.pid,7 where pid is the process ID number of the running program that died. The pid extension means you can have multiple core dumps in the same directory, which is helpful, but also good for consuming disk space!

If you forget to name the files on the command line, you can use 'file executable' to tell GDB the name of the executable file, and 'core-file core-file-name' to tell GDB the name of the core file. With a core dump, GDB indicates where the program died. The following program, ch15-abort.c, creates a few nested function calls and then purposely dies by abort() to create a core dump:

    /* ch15-abort.c --- produce a core dump */

    #include <stdio.h>
    #include <stdlib.h>

    /* recurse --- build up some function calls */

    void recurse(void)
    {
        static int i;

        if (++i == 3)
            abort();
        else
            recurse();
    }

    int main(int argc, char **argv)
    {
        recurse();
    }
Here’s a short GDB session with this program: $ gcc -g ch15-abort.c -o ch15-abort Compile, no -O $ ch15-abort Run the program Aborted (core dumped) It dies miserably $ gdb ch15-abort core.4124 Start GDB on it GNU gdb 5.3 Copyright 2002 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions.
7. See sysctl(8) if you wish to change this behavior.
    There is absolutely no warranty for GDB.  Type "show warranty" for details.
    This GDB was configured as "i686-pc-linux-gnu"...
    Core was generated by `ch15-abort'.
    Program terminated with signal 6, Aborted.
    Reading symbols from /lib/i686/libc.so.6...done.
    Loaded symbols for /lib/i686/libc.so.6
    Reading symbols from /lib/ld-linux.so.2...done.
    Loaded symbols for /lib/ld-linux.so.2
    #0  0x42028cc1 in kill () from /lib/i686/libc.so.6
    (gdb) where                             Print stack trace
    #0  0x42028cc1 in kill () from /lib/i686/libc.so.6
    #1  0x42028ac8 in raise () from /lib/i686/libc.so.6
    #2  0x4202a019 in abort () from /lib/i686/libc.so.6
    #3  0x08048342 in recurse () at ch15-abort.c:13

rnode, & rp);
    Run current statement (1504, above)
    GDB prints next statement
    if (fp == NULL)                                         Hit ENTER to run it, and go to next
    save = tree = tree->lnode;                              ENTER again
    for (numnodes = 0; tree != NULL; tree = tree->rnode)    And again
The step command is an alternative command for single-stepping. There is an important difference between next and step. next executes the next statement. If that statement contains a function call, the function is called and returns before GDB regains control of the running program. On the other hand, when you use step on a statement with a function call, GDB descends into the called function, allowing you to continue single-stepping (or tracing) the program. If a statement doesn't contain a function call, then step is the same as next.

NOTE: It's easy to forget which command you're using and keep pressing ENTER to run each subsequent statement. If you're using step, you can accidentally enter a library function, such as strlen() or printf(), which you really don't want to bother with. In such a case, you can use the command finish, which causes the program to run until the current function returns.
You can print memory contents by using the print command. GDB understands C expression syntax, which makes it easy and natural to examine structures pointed to by pointers:
    (gdb) print *save                       Print the structure pointed to by save
    $1 = {sub = {nodep = {l = {lptr = 0x8095250, param_name = 0x8095250 "pR\t\b",
            ll = 134828624}, r = {rptr = 0x0, pptr = 0, preg = 0x0, hd = 0x0,
            av = 0x0, r_ent = 0}, x = {extra = 0x0, xl = 0, param_list = 0x0},
          name = 0x0, number = 1, reflags = 0}, val = {
          fltnum = 6.6614191194446594e-316, sp = 0x0, slen = 0, sref = 1,
          idx = 0}, hash = {next = 0x8095250, name = 0x0, length = 0,
          value = 0x0, ref = 1}}, type = Node_expression_list, flags = 1}
Finally, the cont (continue) command lets you continue the program's execution. It will run until the next breakpoint or until it exits normally if it doesn't hit any breakpoints. This example picks up where the previous one left off:

    1520            for (numnodes = 0; tree != NULL; tree = tree->rnode)
    (gdb) cont                              Continue
    Continuing.
    hello, world

    Program exited normally.                Informative message from GDB
    (gdb) quit                              Leave the debugger
A watchpoint is like a breakpoint, but for data instead of executable code. You set a watchpoint on a variable (or field in a struct or union, or array element), and when it changes, GDB notifies you. GDB checks the value of the watchpoint as it single-steps the program, and stops when the value changes. For example, the do_lint_old variable in gawk is true when the --lint-old option was issued. This variable is set to true by getopt_long(). (We covered getopt_long() in Section 2.1.2, "GNU Long Options," page 27.) In gawk's main.c file:

    int do_lint_old = FALSE;        /* warn about stuff not in V7 awk */
    ...
    static const struct option optab[] = {
        ...
        { "lint-old", no_argument, & do_lint_old, 1 },
        ...
    };
Here’s a sample session, showing a watchpoint in action: $ gdb gawk Start GDB on gawk GNU gdb 5.3 ... (gdb) watch do_lint_old Set watchpoint on variable Hardware watchpoint 1: do_lint_old (gdb) run --lint-old 'BEGIN { print "hello, world" }' Run the program Starting program: /home/arnold/Gnu/gawk/gawk-3.1.4/gawk --lint-old 'BEGIN { print "hello, world" }' Hardware watchpoint 1: do_lint_old Hardware watchpoint 1: do_lint_old Hardware watchpoint 1: do_lint_old Watchpoint checked as program runs
    Hardware watchpoint 1: do_lint_old
    Hardware watchpoint 1: do_lint_old
    Old value = 0                           Watchpoint stops the program
    New value = 1
    0x420c4219 in _getopt_internal () from /lib/i686/libc.so.6
    (gdb) where                             Stack trace
    #0  0x420c4219 in _getopt_internal () from /lib/i686/libc.so.6
    #1  0x420c4e83 in getopt_long () from /lib/i686/libc.so.6
    #2  0x080683a1 in main (argc=3, argv=0xbffff8a4) at main.c:293
    #3  0x420158d4 in __libc_start_main () from /lib/i686/libc.so.6
    (gdb) quit                              We're done for now
    The program is running.  Exit anyway? (y or n) y          Yes, really
GDB can do much more than we've shown here. Although the GDB manual is large, it is worthwhile to read it in its entirety at least once, to familiarize yourself with its commands and capabilities. After that, it's probably sufficient to look at the NEWS file in each new GDB distribution to see what's new or changed. It's also worth printing the GDB reference card, which comes in the file gdb/doc/refcard.tex within the GDB source distribution. You can create a printable PostScript version of the reference card, after extracting the source and running configure, by using these commands:

    $ cd gdb/doc                            Change to doc subdirectory
    $ make refcard.ps                       Format the reference card
The reference card is meant to be printed dual-sided, on 8.5 x 11 inch (“letter”) paper, in landscape format. It provides a six-column summary of the most useful GDB commands. We recommend printing it and having it by your keyboard as you work with GDB.
15.4 Programming for Debugging

There are many techniques for making source code easier to debug, ranging from simple to involved. We look at a number of them in this section.
15.4.1 Compile-Time Debugging Code

Several techniques relate to the source code itself.

15.4.1.1 Use Debugging Macros

Perhaps the simplest compile-time technique is the use of the preprocessor to provide conditionally compiled code. For example:
    #ifdef DEBUG
        fprintf(stderr, "myvar = %d\n", myvar);
        fflush(stderr);
    #endif /* DEBUG */
Adding -DDEBUG to the compiler command line causes the call to fprintf() to execute when the program runs.

Recommendation: Send debug messages to stderr so that they aren't lost down a pipeline and so that they can be captured with an I/O redirection. Be sure to use fflush() so that messages are forced to the output as soon as possible.

NOTE: The symbol DEBUG, while obvious, is also highly overused. It's a better idea to use a symbol specific to your program, such as MYAPPDEBUG. You can even use different symbols for debugging code in different parts of your program, such as file I/O, data verification, memory management, and so on.
Scattering lots of #ifdef statements throughout your code quickly becomes painful. And too many #ifdefs obscure the main program logic. There's got to be a better way, and indeed, a technique that's often used is to conditionally define special macros for printing:

    /* TECHNIQUE 1 --- commonly used but not recommended, see text */

    /* In application header file: */
    #ifdef MYAPPDEBUG
    #define DPRINT0(msg)              fprintf(stderr, msg)
    #define DPRINT1(msg, v1)          fprintf(stderr, msg, v1)
    #define DPRINT2(msg, v1, v2)      fprintf(stderr, msg, v1, v2)
    #define DPRINT3(msg, v1, v2, v3)  fprintf(stderr, msg, v1, v2, v3)
    #else /* ! MYAPPDEBUG */
    #define DPRINT0(msg)
    #define DPRINT1(msg, v1)
    #define DPRINT2(msg, v1, v2)
    #define DPRINT3(msg, v1, v2, v3)
    #endif /* ! MYAPPDEBUG */

    /* In application source file: */
    DPRINT1("myvar = %d\n", myvar);
    ...
    DPRINT2("v1 = %d, v2 = %f\n", v1, v2);
There are multiple macros, one for each different number of arguments, up to whatever limit you wish to provide. When MYAPPDEBUG is defined, the calls to the DPRINTx() macros expand into calls to fprintf(). When MYAPPDEBUG isn’t defined,
then those same calls expand to nothing. (This is essentially how assert() works; we described assert() in Section 12.1, "Assertion Statements: assert()," page 428.) This technique works; we have used it ourselves and seen it recommended in textbooks. However, it can be refined a bit further, reducing the number of macros down to one:

    /* TECHNIQUE 2 --- most portable; recommended */

    /* In application header file: */
    #ifdef MYAPPDEBUG
    #define DPRINT(stuff) fprintf stuff
    #else
    #define DPRINT(stuff)
    #endif

    /* In application source file: */
    DPRINT((stderr, "myvar = %d\n", myvar));    /* Note the double parentheses */
Note how the macro is invoked, with two sets of parentheses! By making the entire argument list for fprintf() into a single argument, you no longer need to have an arbitrary number of debugging macros.

If you are using a compiler that conforms to the 1999 C standard, you have an additional choice, which produces the cleanest-looking debugging code:

    /* TECHNIQUE 3 --- cleanest, but C99 only */

    /* In application header file: */
    #ifdef MYAPPDEBUG
    #define DPRINT(mesg, ...) fprintf(stderr, mesg, __VA_ARGS__)
    #else
    #define DPRINT(mesg, ...)
    #endif

    /* In application source file: */
    DPRINT("myvar = %d\n", myvar);
    ...
    DPRINT("v1 = %d, v2 = %f\n", v1, v2);
The 1999 C standard provides variadic macros; that is, macros that can accept a variable number of arguments. (This is similar to variadic functions, like printf().) In the macro definition, the three periods, '...', indicate that there will be zero or more arguments. In the macro body, the special identifier __VA_ARGS__ is replaced with the provided arguments, however many there are. The advantage to this mechanism is that only one set of parentheses is necessary when the debugging macro is invoked, making the code read much more naturally.
It also preserves the ability to have just one macro name, instead of multiple names that vary according to the number of arguments. The disadvantage is that C99 compilers are not yet widely available, reducing the portability of this construct. (However, this situation will improve with time.)

Recommendation: Current versions of GCC do support C99 variadic macros. Thus, if you know that you will never be using anything but GCC (or some other C99 compiler) to compile your program, you can use the C99 mechanism. However, as of this writing, C99 compilers are still not commonplace. So, if your code has to be compiled by different compilers, you should use the double-parentheses-style macro.

15.4.1.2 Avoid Expression Macros If Possible

In general, C preprocessor macros are a rather sharp, two-edged sword. They provide you with great power, but they also provide a great opportunity to injure yourself.8

For efficiency or code clarity, it's common to see macros such as this:

    #define RS_is_null (RS_node->var_value == Nnull_string)
    ...
    if (RS_is_null || today == TUESDAY) ...
At first glance, this looks fine. The condition 'RS_is_null' is clear and easy to understand, and abstracts the details inherent in the test. The problem comes when you try to print the value in GDB:

    (gdb) print RS_is_null
    No symbol "RS_is_null" in current context.
In such a case, you have to track down the definition of the macro and print the expanded value. Recommendation: Use variables to represent important conditions in your program, with explicit code to change the variable values when the conditions change. Here is an abbreviated example, from io.c in the gawk distribution:
8. Bjarne Stroustrup, the creator of C++, worked hard to make the use of the C preprocessor completely unnecessary in C++. In our opinion, he didn't quite succeed: #include is still needed, but regular macros aren't. For C, the preprocessor remains a valuable tool, but it should be used judiciously.
    void set_RS()
    {
        ...
        RS_is_null = FALSE;
        ...
        if (RS->stlen == 0) {
            RS_is_null = TRUE;
            matchrec = rsnullscan;
        }
        ...
    }
Once RS_is_null is set and maintained, it can be tested by code and printed from within a debugger.

NOTE: Beginning with GCC 3.1 and version 5 of GDB, if you compile your program with the options -gdwarf-2 and -g3, you can use macros from within GDB. The GDB manual states that the GDB developers hope to eventually find a more compact representation for macros, and that the -g3 option will be subsumed into -g.

However, only the combination of GCC, GDB, and the special options allows you to use macros this way: If you're not using GCC (or if you're using an older version), you still have the problem. We stand by our recommendation to avoid such macros if you can.
The problem with macros extends to code fragments as well. If a macro defines multiple statements, you can’t set a breakpoint inside the middle of the macro. This is also true of C99 and C++ inline functions: If the compiler substitutes the body of an inline function into the generated code, it is again difficult or impossible to set a breakpoint inside it. This ties in with our advice to compile with -g alone; in this case, compilers usually don’t do function inlining. Along similar lines, it’s common to have a variable that represents a particular state. It’s easy, and encouraged by many C programming books, to #define symbolic constants for these states. For example:
    /* The various states to be in when scanning for the end of a record. */
    #define NOSTATE   1    /* scanning not started yet (all) */
    #define INLEADER  2    /* skipping leading data (RS = "") */
    #define INDATA    3    /* in body of record (all) */
    #define INTERM    4    /* scanning terminator (RS = "", RS = regexp) */

    int state;
    ...
    state = NOSTATE;
    ...
    state = INLEADER;
    ...
    if (state != INTERM)
        ...
At the source code level, this looks great. But again, there is a problem when you look at the code from within GDB:

    (gdb) print state
    $1 = 2
Here too, you’re forced to go back and look at the header file to figure out what the 2 means. So, what’s the alternative? Recommendation: Use enums instead of macros to define symbolic constants. The source code usage is the same, and the debugger can print the enums’ values too. An example, also from io.c in gawk: typedef enum scanstate { NOSTATE, /* scanning not started yet (all) */ INLEADER, /* skipping leading data (RS = "") */ INDATA, /* in body of record (all) */ INTERM, /* scanning terminator (RS = "", RS = regexp) */ } SCANSTATE; SCANSTATE state;
… rest of code remains unchanged! …
Now, when looking at state from within GDB, we see something useful:

    (gdb) print state
    $1 = NOSTATE
15.4.1.3 Reorder Code If Necessary

It's not uncommon to have a condition in an if or while consist of multiple component tests, separated by && or ||. If these tests are function calls (or even if they're not), it's impossible to single-step each separate part of the condition. GDB's step and next commands work on the basis of statements, not expressions. (Splitting such things across lines doesn't help, either.)
Recommendation: Rewrite the original code, using explicit temporary variables that store return values or conditional results so that you can examine them in a debugger. The original code should be maintained in a comment so that you (or some later programmer) can tell what's going on. Here's a concrete example: the function do_input() from gawk's file io.c:

     1  /* do_input --- the main input processing loop */
     2
     3  void
     4  do_input()
     5  {
     6      IOBUF *iop;
     7      extern int exiting;
     8      int rval1, rval2, rval3;
     9
    10      (void) setjmp(filebuf);    /* for `nextfile' */
    11
    12      while ((iop = nextfile(FALSE)) != NULL) {
    13          /*
    14           * This was:
    15           *    if (inrec(iop) == 0)
    16           *        while (interpret(expression_value) && inrec(iop) == 0)
    17           *            continue;
    18           * Now expand it out for ease of debugging.
    19           */
    20          rval1 = inrec(iop);
    21          if (rval1 == 0) {
    22              for (;;) {
    23                  rval2 = rval3 = -1;    /* for debugging */
    24                  rval2 = interpret(expression_value);
    25                  if (rval2 != 0)
    26                      rval3 = inrec(iop);
    27                  if (rval2 == 0 || rval3 != 0)
    28                      break;
    29              }
    30          }
    31          if (exiting)
    32              break;
    33      }
    34  }
(The line numbers are relative to the start of the routine, not the file.) This function is the heart of gawk’s main processing loop. The outer loop (lines 12 and 33) steps through the command-line data files. The comment on lines 13–19 shows the original code, which reads each record from the current file and processes it.
A 0 return value from inrec() indicates an OK status, while a nonzero return value from interpret() indicates an OK status. When we tried to step through this loop, verifying the record reading process, it became necessary to perform each step individually.

Lines 20–30 are the rewritten code, which calls each function separately, storing the return values in local variables so that they can be printed from the debugger. Note how line 23 forces these variables to have known, invalid values each time around the loop: Otherwise, they would retain their values from previous loop iterations. Line 27 is the exit test; because the code has changed to an infinite loop (compare line 22 to line 16), the test for breaking out of the loop is the opposite of the original test.

As an aside, we admit to having had to study the rewrite carefully when we made it, to make sure it did exactly the same as the original code; it did. It occurs to us now that perhaps this version of the loop might be closer to the original:
The truth is, both versions are harder to read than the original and thus potentially in error. However, since the current code works, we decided to leave well enough alone.

Finally, we note that not all expert programmers would agree with our advice here. When each component of a condition is a function call, you can set a breakpoint on each one, use step to step into each function, and then use finish to complete the function. GDB will tell you the function's return value, and from that point you can use cont or step to continue. We like our approach because the results are kept in variables, which can be checked (and rechecked) after the function calls, and even a few statements later.

15.4.1.4 Use Debugging Helper Functions

A common technique, applicable in many cases, is to have a set of flag values; when a flag is set (that is, true), a certain fact is true or a certain condition applies. This is commonly done with #defined symbolic constants and the C bitwise operators.
15.4 Programming for Debugging
585
(We discussed the use of bit flags and the bit manipulation operators in the sidebar in Section 8.3.1, “POSIX Style: statvfs() and fstatvfs(),” page 244.) For example, gawk’s central data structure is called a NODE. It has a large number of fields, the last of which is a set of flag values. From the file awk.h:

    typedef struct exp_node {
        ... Lots of stuff omitted ...
        unsigned short flags;
    #   define MALLOC     1     /* can be free'd */
    #   define TEMP       2     /* should be free'd */
    #   define PERM       4     /* can't be free'd */
    #   define STRING     8     /* assigned as string */
    #   define STRCUR     16    /* string value is current */
    #   define NUMCUR     32    /* numeric value is current */
    #   define NUMBER     64    /* assigned as number */
    #   define MAYBE_NUM  128   /* user input: if NUMERIC then
                                 * a NUMBER */
    #   define ARRAYMAXED 256   /* array is at max size */
    #   define FUNC       512   /* this parameter is really a
                                 * function name; see awkgram.y */
    #   define FIELD      1024  /* this is a field */
    #   define INTLSTR    2048  /* use localized version */
    } NODE;
The reason to use flag values is that they provide considerable savings in data space. If the NODE structure used a separate char field for each flag, that would use 12 bytes instead of the 2 used by the unsigned short. The current size of a NODE (on an Intel x86) is 32 bytes. Adding 10 more bytes would bump that to 42 bytes. Since gawk can allocate potentially hundreds of thousands (or even millions) of NODEs,9 keeping the size down is important.

What does this have to do with debugging? Didn’t we just recommend using enums for symbolic constants? Well, in the case of OR’d values, enums are no help, since they’re no longer individually recognizable!

Recommendation: Provide a function to convert flags to a string. If you have multiple independent flags, set up a general-purpose routine.
9 Seriously! People often run megabytes of data through gawk. Remember, no arbitrary limits!
NOTE
What’s unusual about these debugging functions is that application code never calls them. They exist only so that they can be called from a debugger. Such functions should always be compiled in, without even a surrounding #ifdef, so that you can use them without having to take special steps. The (usually minimal) extra code size is justified by the developer’s time savings.
First we’ll show you how we did this initially. Here is (an abbreviated version of) flags2str() from an earlier version of gawk (3.0.6):

     1  /* flags2str --- make a flags value readable */
     2
     3  char *
     4  flags2str(flagval)
     5  int flagval;
     6  {
     7      static char buffer[BUFSIZ];
     8      char *sp;
     9
    10      sp = buffer;
    11
    12      if (flagval & MALLOC) {
    13          strcpy(sp, "MALLOC");
    14          sp += strlen(sp);
    15      }
    16      if (flagval & TEMP) {
    17          if (sp != buffer)
    18              *sp++ = '|';
    19          strcpy(sp, "TEMP");
    20          sp += strlen(sp);
    21      }
    22      if (flagval & PERM) {
    23          if (sp != buffer)
    24              *sp++ = '|';
    25          strcpy(sp, "PERM");
    26          sp += strlen(sp);
    27      }

        ... much more of the same, omitted for brevity ...

    82
    83      return buffer;
    84  }
(The line numbers are relative to the start of the function.) The result is a string, something like "MALLOC|PERM|NUMBER". Each flag is tested separately, and if present, each one’s action is the same: test if not at the beginning of the buffer so we can add
the ‘|’ character, copy the string into place, and update the pointer. Similar functions existed for formatting and displaying the other kinds of flags in the program.

The code is both repetitive and error prone, and for gawk 3.1 we were able to simplify and generalize it. Here’s how gawk now does it. Starting with this definition in awk.h:

    /* for debugging purposes */
    struct flagtab {
        int val;            /* integer flag value */
        const char *name;   /* string name */
    };

This structure can be used to represent any set of flags with their corresponding string values. Each different group of flags has a corresponding function that returns a printable representation of the flags that are currently set. From eval.c:

    /* flags2str --- make a flags value readable */

    const char *
    flags2str(int flagval)
    {
        static const struct flagtab values[] = {
            { MALLOC, "MALLOC" },
            { TEMP, "TEMP" },
            { PERM, "PERM" },
            { STRING, "STRING" },
            { STRCUR, "STRCUR" },
            { NUMCUR, "NUMCUR" },
            { NUMBER, "NUMBER" },
            { MAYBE_NUM, "MAYBE_NUM" },
            { ARRAYMAXED, "ARRAYMAXED" },
            { FUNC, "FUNC" },
            { FIELD, "FIELD" },
            { INTLSTR, "INTLSTR" },
            { 0, NULL },
        };

        return genflags2str(flagval, values);
    }

flags2str() defines an array that maps flag values to strings. By convention, a 0 flag value indicates the end of the array. The code calls genflags2str() (“general flags to string”) to do the work. genflags2str() is a general-purpose routine that converts a flag value into a string. From eval.c:
     1  /* genflags2str --- general routine to convert a flag value to a string */
     2
     3  const char *
     4  genflags2str(int flagval, const struct flagtab *tab)
     5  {
     6      static char buffer[BUFSIZ];
     7      char *sp;
     8      int i, space_left, space_needed;
     9
    10      sp = buffer;
    11      space_left = BUFSIZ;
    12      for (i = 0; tab[i].name != NULL; i++) {
    13          if ((flagval & tab[i].val) != 0) {
    14              /*
    15               * note the trick, we want 1 or 0 for whether we need
    16               * the '|' character.
    17               */
    18              space_needed = (strlen(tab[i].name) + (sp != buffer));
    19              if (space_left < space_needed)
    20                  fatal(_("buffer overflow in genflags2str"));
    21
    22              if (sp != buffer) {
    23                  *sp++ = '|';
    24                  space_left--;
    25              }
    26              strcpy(sp, tab[i].name);
    27              /* note ordering! */
    28              space_left -= strlen(sp);
    29              sp += strlen(sp);
    30          }
    31      }
    32
    33      return buffer;
    34  }
(Line numbers are relative to the start of the function, not the file.) As with the previous version, the idea here is to fill in a static buffer with a string value such as "MALLOC|PERM|STRING|MAYBE_NUM" and return the address of that buffer. We discuss the reasons for using a static buffer shortly; first let’s examine the code. The sp pointer tracks the position of the next empty spot in the buffer, while space_left tracks how much room is left; this keeps us from overflowing the buffer. The bulk of the function is a loop (line 12) through the array of flag values. When a flag is found (line 13), the code computes how much space is needed for the string (line 18) and tests to see if that much room is left (lines 19–20).
The test ‘sp != buffer’ fails on the first flag value found, returning 0. On subsequent flags, the test has a value of 1. This tells us if we need the ‘|’ separator character between values. By adding the result (1 or 0) to the length of the string, we get the correct value for space_needed. The same test, for the same reason, is used on line 22 to control lines 23 and 24, which insert the ‘|’ character. Finally, lines 26–29 copy in the string value, adjust the amount of space left, and update the sp pointer. Line 33 returns the address of the buffer, which contains the printable representation of the string.

Now, what about that static buffer? Normally, good programming practice discourages the use of functions that return the address of static buffers: It’s easy to have multiple calls to such a function overwrite the buffer each time, forcing the caller to copy the returned data. Furthermore, a static buffer is by definition a buffer of fixed size. What happened to the GNU “no arbitrary limits” principle?

The answer to both of these questions is to remember that this is a debugging function. Normal code never calls genflags2str(); it’s only called by a human using a debugger. No caller holds a pointer to the buffer; as a developer doing debugging, we don’t care that the buffer gets overwritten each time we call the function. In practice, the fixed size isn’t an issue either; we know that BUFSIZ is big enough to represent all the flags that we use. Nevertheless, being experienced and knowing that things can change, genflags2str() has code to protect itself from overrunning the buffer. (The space_left variable and the code on lines 18–20.)

As an aside, the use of BUFSIZ is arguable. That constant should be used exclusively for I/O buffers, but it is often used for general string buffers as well. Such code would be better off defining explicit constants, such as FLAGVALSIZE, and using ‘sizeof(buffer)’ on line 11.
Here is an abbreviated GDB session showing flags2str() in use:
    $ gdb gawk                                     Start GDB on gawk
    GNU gdb 5.3
    ...
    (gdb) break do_print                           Set a breakpoint
    Breakpoint 1 at 0x805a584: file builtin.c, line 1547.
    (gdb) run 'BEGIN { print "hello, world" }'     Start it running
    Starting program: /home/arnold/Gnu/gawk/gawk-3.1.4/gawk
            'BEGIN { print "hello, world" }'

    Breakpoint 1, do_print (tree=0x80955b8) at builtin.c:1547   Breakpoint hit
    1547            struct redirect *rp = NULL;
    (gdb) print *tree                              Print NODE
    $1 = {sub = {nodep = {l = {lptr = 0x8095598, param_name = 0x8095598 "xU\t\b",
            ll = 134829464}, r = {rptr = 0x0, pptr = 0, preg = 0x0, hd = 0x0,
            av = 0x0, r_ent = 0}, x = {extra = 0x0, xl = 0, param_list = 0x0},
          name = 0x0, number = 1, reflags = 0}, val = {
          fltnum = 6.6614606209589101e-316, sp = 0x0, slen = 0, sref = 1,
          idx = 0}, hash = {next = 0x8095598, name = 0x0, length = 0,
          value = 0x0, ref = 1}}, type = Node_K_print, flags = 1}
    (gdb) print flags2str(tree->flags)             Print flag value
    $2 = 0x80918a0 "MALLOC"
    (gdb) next                                     Keep going
    1553            fp = redirect_to_fp(tree->rnode, & rp);
    ...
    1588            efwrite(t[i]->stptr, sizeof(char), t[i]->stlen, fp,
                            "print", rp, FALSE);
    (gdb) print *t[i]                              Print NODE again
    $4 = {sub = {nodep = {l = {lptr = 0x8095598, param_name = 0x8095598 "xU\t\b",
            ll = 134829464}, r = {rptr = 0x0, pptr = 0, preg = 0x0, hd = 0x0,
            av = 0x0, r_ent = 0}, x = {extra = 0x8095ad8, xl = 134830808,
            param_list = 0x8095ad8}, name = 0xc, number = 1,
          reflags = 4294967295}, val = {fltnum = 6.6614606209589101e-316,
          sp = 0x8095ad8 "hello, world", slen = 12, sref = 1, idx = -1},
          hash = {next = 0x8095598, name = 0x0, length = 134830808,
          value = 0xc, ref = 1}}, type = Node_val, flags = 29}
    (gdb) print flags2str(t[i]->flags)             Print flag value
    $5 = 0x80918a0 "MALLOC|PERM|STRING|STRCUR"
We hope you’ll agree that the current general-purpose mechanism is considerably more elegant than the original one, and easier to use. Careful design and use of arrays of structs can often replace or consolidate repetitive code.
15.4.1.5 Avoid Unions When Possible

“There’s no such thing as a free lunch.”
—Lazarus Long—
The C union is a relatively esoteric facility. It allows you to save memory by storing different items within the same physical space; how the program treats it depends on how it’s accessed:

    /* ch15-union.c --- brief demo of union usage. */

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        union i_f {
            int i;
            float f;
        } u;

        u.f = 12.34;    /* Assign a floating point value */
        printf("%f also looks like %#x\n", u.f, u.i);
        exit(0);
    }
Here is what happens when the program is run on an Intel x86 GNU/Linux system:

    $ ch15-union
    12.340000 also looks like 0x414570a4
The program prints the bit pattern that represents a floating-point number as a hexadecimal integer. The storage for the two fields occupies the same memory; the difference is in how the memory is treated: u.f acts like a floating-point number, whereas the same bits in u.i act like an integer.

Unions are particularly useful in compilers and interpreters, which often create a tree structure representing the structure of a source code file (called a parse tree). This models the way programming languages are formally described: if statements, while statements, assignment statements, and so on are all instances of the more generic “statement” type. Thus, a compiler might have something like this:

    struct if_stmt { ... };        /* structure for IF statement */
    struct while_stmt { ... };     /* structure for WHILE statement */
    struct for_stmt { ... };       /* structure for FOR statement */

    ... structures for other statement types ...

    typedef enum stmt_type {
        IF, WHILE, FOR, ...
    } TYPE;                        /* what we actually have */
Chapter 15 • Debugging
592
    /* This contains the type and unions of the individual kinds of statements. */
    struct statement {
        TYPE type;
        union stmt {
            struct if_stmt if_st;
            struct while_stmt while_st;
            struct for_stmt for_st;
            ...
        } u;
    };
Along with the union, it is conventional to use macros to make the components of the union look like they were fields in a struct. For example:

    #define if_s     u.if_st      /* so can use s->if_s instead of s->u.if_st */
    #define while_s  u.while_st   /* and so on ... */
    #define for_s    u.for_st
    ...
At the level just presented, this seems reasonable and looks manageable. The real world, however, is a more complicated place, and practical compilers and interpreters often have several levels of nested structs and unions. This includes gawk, in which the definition of the NODE, its flag values, and macros for accessing union components takes over 120 lines!10 Here is enough of that definition to give you a feel for what’s happening:

    typedef struct exp_node {
        union {
            struct {
                union {
                    struct exp_node *lptr;
                    char *param_name;
                    long ll;
                } l;
                union {
                    ...
                } r;
                union {
                    ...
                } x;
                char *name;
                short number;
                unsigned long reflags;
                ...
            } nodep;
            struct {
                AWKNUM fltnum;
                char *sp;
                size_t slen;
                long sref;
                int idx;
            } val;
            struct {
                struct exp_node *next;
                char *name;
                size_t length;
                struct exp_node *value;
                long ref;
            } hash;
    #define hnext    sub.hash.next
    #define hname    sub.hash.name
    #define hlength  sub.hash.length
    #define hvalue   sub.hash.value
        ...
        } sub;
        NODETYPE type;
        unsigned short flags;
        ...
    } NODE;

    #define vname       sub.nodep.name
    #define exec_count  sub.nodep.reflags

    #define lnode        sub.nodep.l.lptr
    #define nextp        sub.nodep.l.lptr
    #define source_file  sub.nodep.name
    #define source_line  sub.nodep.number
    #define param_cnt    sub.nodep.number
    #define param        sub.nodep.l.param_name

    #define stptr  sub.val.sp
    #define stlen  sub.val.slen
    #define stref  sub.val.sref
    #define stfmt  sub.val.idx

    #define var_value lnode
    ...

The NODE has a union inside a struct inside a union inside a struct! (Ouch.) On top of that, multiple macro “fields” map to the same struct/union components, depending on what is actually stored in the NODE! (Ouch, again.) The benefit of this complexity is that the C code is relatively clear. Something like ‘NF_node->var_value->slen’ is straightforward to read.

10 We inherited this design. In general it works, but it does have its problems. The point of this section is to pass on the experience we’ve acquired working with unions.
There is, of course, a price to pay for the flexibility that unions provide. When your debugger is deep down in the guts of your code, you can’t use the nice macros that appear in the source. You must use the real expansion.11 (And for that, you have to find the definition in the header file.) For example, compare ‘NF_node->var_value->slen’ to what it expands to: ‘NF_node->sub.nodep.l.lptr->sub.val.slen’! You must type the latter into GDB to look at your data value.

Look again at this excerpt from the earlier GDB debugging session:

    (gdb) print *tree                              Print NODE
    $1 = {sub = {nodep = {l = {lptr = 0x8095598, param_name = 0x8095598 "xU\t\b",
            ll = 134829464}, r = {rptr = 0x0, pptr = 0, preg = 0x0, hd = 0x0,
            av = 0x0, r_ent = 0}, x = {extra = 0x0, xl = 0, param_list = 0x0},
          name = 0x0, number = 1, reflags = 0}, val = {
          fltnum = 6.6614606209589101e-316, sp = 0x0, slen = 0, sref = 1,
          idx = 0}, hash = {next = 0x8095598, name = 0x0, length = 0,
          value = 0x0, ref = 1}}, type = Node_K_print, flags = 1}
That’s a lot of goop. However, GDB does make this a little easier to handle. You can use expressions like ‘($1).sub.val.slen’ to step through the tree and list data structures.

There are other reasons to avoid unions. First of all, unions are unchecked. Nothing but programmer attention ensures that when you access one part of a union, you are accessing the same part that was last stored. We saw this in ch15-union.c, which accessed both of the union’s “identities” simultaneously.

A second reason, related to the first, is to be careful of overlays in complicated nested struct/union combinations. For example, an earlier version of gawk12 had this code:

    /* n->lnode overlays the array size, don't unref it if array */
    if (n->type != Node_var_array && n->type != Node_array_ref)
        unref(n->lnode);
11 Again, GCC 3.1 or newer and GDB 5 can let you use macros directly, but only if you’re using them together, with specific options. This was described earlier, in Section 15.4.1.2, “Avoid Expression Macros If Possible,” page 580.
12 This part of the code has since been revised, and the example lines are no longer there.
Originally, there was no if, just a call to unref(), which frees the NODE pointed to by n->lnode. However, it was possible to crash gawk at this point. You can imagine how long it took, in a debugger, to track down the fact that what was being treated as a pointer was in reality an array size!

As an aside, unions are considerably less useful in C++. Inheritance and object-oriented features make data structure management a different ball game, one that is considerably safer.

Recommendation: Avoid unions if possible. If not, design and code them carefully!
15.4.2 Runtime Debugging Code

Besides things you add to your code at compile time, you can also add extra code to enable debugging features at runtime. This is particularly useful for applications that are installed in the field, where a customer’s system won’t have the source code installed (and maybe not even a compiler!).

This section presents some runtime debugging techniques that we have used over the years, ranging from simple to more complex. Note that our treatment is by no means exhaustive. This is an area where it pays to have some imagination and to use it!

15.4.2.1 Add Debugging Options and Variables

The simplest technique is to have a command-line option that enables debugging. Such an option can be conditionally compiled in when you are debugging. But it’s more flexible to leave the option in the production version of the program. (You may or may not also wish to leave the option undocumented as well. This has various tradeoffs: Documenting it can allow your customers or clients to learn more about the internals of your system, which you may not want. On the other hand, not documenting it seems rather sneaky. If you’re writing Open Source or Free Software, it’s better to document the option.)

If your program is large, you may wish your debugging option to take an argument indicating what subsystem should be debugged. Based on the argument, you can set different flag variables or possibly different bit flags in a single debugging variable. Here is an outline of this technique:
Chapter 15 • Debugging
596
    struct option options[] = {
        ...
        { "debug", required_argument, NULL, 'D' },
        ...
    };

    int main(int argc, char **argv)
    {
        int c;

        while ((c = getopt_long(argc, argv, "...D:", options, NULL)) != -1) {
            switch (c) {
            ...
            case 'D':
                parse_debug(optarg);
                break;
            ...
            }
        }
        ...
    }
The parse_debug() function reads through the argument string. For example, it could be a comma- or space-separated string of subsystems, like "file,memory,ipc". For each valid subsystem name, the function would set a bit in a debugging variable:

    extern int debugging;

    void parse_debug(const char *subsystems)
    {
        const char *sp;

        for (sp = subsystems; *sp != '\0';) {
            if (strncmp(sp, "file", 4) == 0) {
                debugging |= DEBUG_FILE;
                sp += 4;
            } else if (strncmp(sp, "memory", 6) == 0) {
                debugging |= DEBUG_MEM;
                sp += 6;
            } else if (strncmp(sp, "ipc", 3) == 0) {
                debugging |= DEBUG_IPC;
                sp += 3;
            }
            ... handle other subsystem names ...
            while (*sp == ' ' || *sp == ',')
                sp++;
        }
    }
Finally, application code can then test the flags:
    if ((debugging & DEBUG_FILE) != 0) ...    /* in the I/O part of the program */

    if ((debugging & DEBUG_MEM) != 0) ...     /* in the memory manager */
It is up to you whether to use a single variable with flag bits, separate variables, or even a debugging array, indexed by symbolic constants (preferably from an enum).

The cost of leaving the debugging code in your production executable is that the program will be larger. Depending on the placement of your debugging code, it may also be slower since the tests are always performed, but are always false until debugging is turned on. And, as mentioned, it may be possible for someone to learn about your program, which you may not want. Or worse, a malevolent user could enable so much debugging that the program slows to an unusable state! (This is called a denial of service attack.)

The benefit, which can be great, is that your already installed program can be reinvoked with debugging turned on, without requiring you to build, and then download, a special version to your customer site. When the software is installed in remote places that may not have people around and all you can do is access the system remotely through the Internet (or worse, a slow telephone dial-in!), such a feature can be a lifesaver.

Finally, you may wish to mix and match: use conditionally compiled debugging code for fine-grained, high-detail debugging, and save the always-present code for a coarser level of output.

15.4.2.2 Use Special Environment Variables

Another useful trick is to have your application pay attention to special environment variables (documented or otherwise). This can be particularly useful for testing. Here’s another example from our experience with gawk, but first, some background.

gawk uses a function named optimal_bufsize() to obtain the optimal buffer size
for I/O. For small files, the function returns the file size. Otherwise, if the filesystem defines a size to use for I/O, it returns that (the st_blksize member in the struct stat; see Section 5.4.2, “Retrieving File Information,” page 141). If that member isn’t available, optimal_bufsize() returns the BUFSIZ constant from <stdio.h>. The original function (in posix/gawkmisc.c) looked like this:
     1  /* optimal_bufsize --- determine optimal buffer size */
     2
     3  int
     4  optimal_bufsize(fd, stb)
     5  int fd;
     6  struct stat *stb;
     7  {
     8      /* force all members to zero in case OS doesn't use all of them. */
     9      memset(stb, '\0', sizeof(struct stat));
    10
    11      /*
    12       * System V.n, n < 4, doesn't have the file system block size in the
    13       * stat structure. So we have to make some sort of reasonable
    14       * guess. We use stdio's BUFSIZ, since that is what it was
    15       * meant for in the first place.
    16       */
    17  #ifdef HAVE_ST_BLKSIZE
    18  #define DEFBLKSIZE  (stb->st_blksize ? stb->st_blksize : BUFSIZ)
    19  #else
    20  #define DEFBLKSIZE  BUFSIZ
    21  #endif
    22
    23      if (isatty(fd))
    24          return BUFSIZ;
    25      if (fstat(fd, stb) == -1)
    26          fatal("can't stat fd %d (%s)", fd, strerror(errno));
    27      if (lseek(fd, (off_t)0, 0) == -1)    /* not a regular file */
    28          return DEFBLKSIZE;
    29      if (stb->st_size > 0 && stb->st_size < DEFBLKSIZE) /* small file */
    30          return stb->st_size;
    31      return DEFBLKSIZE;
    32  }
The constant DEFBLKSIZE is the “default block size”; that is, the value from the struct stat, or BUFSIZ. For terminals (line 23) or for files that aren’t regular files (lseek() fails, line 27), the return value is also BUFSIZ. For regular files that are small, the file size is used. In all other cases, DEFBLKSIZE is returned. Knowing the “optimal” buffer size is particularly useful on filesystems in which the block size is larger than BUFSIZ. We had a problem whereby one of our test cases worked perfectly on our development GNU/Linux system and every other Unix system we had access to. However, this test would fail consistently on certain other systems.
For a long time, we could not get direct access to a failing system in order to run GDB. Eventually, however, we did manage to reproduce the problem; it turned out to be related to the size of the buffer gawk was using for reading data files: On the failing systems, the buffer size was larger than for our development system. We wanted a way to be able to reproduce the problem on our development machine: The failing system was nine time zones away, and running GDB interactively across the Atlantic Ocean is painful. We reproduced the problem by having optimal_bufsize() look at a special environment variable, AWKBUFSIZE. When the value is "exact", optimal_bufsize() always returns the size of the file, whatever that may be. If the value of AWKBUFSIZE is some integer number, the function returns that number. Otherwise, the function falls back to the previous algorithm. This allows us to run tests without having to constantly recompile gawk. For example, $ AWKBUFSIZE=42 make check
This runs the gawk test suite, using a buffer size of 42 bytes. (The test suite passes.) Here is the modified version of optimal_bufsize(): 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
/* optimal_bufsize --- determine optimal buffer size */ /* * Enhance this for debugging purposes, as follows: * * Always stat the file, stat buffer is used by higher-level code. * * if (AWKBUFSIZE == "exact") * return the file size * else if (AWKBUFSIZE == a number) * always return that number * else * if the size is < default_blocksize * return the size * else * return default_blocksize * end if * endif * * Hair comes in an effort to only deal with AWKBUFSIZE * once, the first time this routine is called, instead of * every time. Performance, dontyaknow. */ size_t optimal_bufsize(fd, stb) int fd;
Chapter 15 • Debugging
600
28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79
struct stat *stb; { char *val; static size_t env_val = 0; static short first = TRUE; static short exact = FALSE; /* force all members to zero in case OS doesn't use all of them. */ memset(stb, '\0', sizeof(struct stat)); /* always stat, in case stb is used by higher level code. */ if (fstat(fd, stb) == -1) fatal("can't stat fd %d (%s)", fd, strerror(errno)); if (first) { first = FALSE; if ((val = getenv("AWKBUFSIZE")) != NULL) { if (strcmp(val, "exact") == 0) exact = TRUE; else if (ISDIGIT(*val)) { for (; *val && ISDIGIT(*val); val++) env_val = (env_val * 10) + *val - '0'; return env_val; } } } else if (! exact && env_val > 0) return env_val; /* else fall through */ /* * System V.n, n < 4, doesn't have the file system block size in the * stat structure. So we have to make some sort of reasonable * guess. We use stdio's BUFSIZ, since that is what it was * meant for in the first place. */ #ifdef HAVE_ST_BLKSIZE #define DEFBLKSIZE (stb->st_blksize > 0 ? stb->st_blksize : BUFSIZ) #else #define DEFBLKSIZE BUFSIZ #endif if (S_ISREG(stb->st_mode) && 0 < stb->st_size && (stb->st_size < DEFBLKSIZE || exact)) return stb->st_size; return DEFBLKSIZE; }
/* /* /* /* /*
regular file */ non-zero size */ small file */ or debugging */ use file size */
The comment on lines 3–23 explains the algorithm. Since searching the environment can be expensive and it only needs to be done once, the function uses several static variables to collect the appropriate information the first time.

Lines 42–54 execute the first time the function is called, and only the first time. Line 43 enforces this condition by setting first to false. Lines 45–54 handle the environment variable, looking for either "exact" or a number. In the latter case, it converts the string value to decimal, saving it in env_val. (We probably should have used strtoul() here; it didn’t occur to us at the time.)

Line 55 executes every time but the first. If a numeric value was given, the condition will be true and that value is returned (line 56). Otherwise, it falls through to the rest of the function. Lines 60–70 define DEFBLKSIZE; this part has not changed. Finally, lines 72–76 return the file size if appropriate. If not (line 78), DEFBLKSIZE is returned.

We did fix the problem,13 but in the meantime, we left the new version of optimal_bufsize() in place, so that we could be sure the problem hasn’t reoccurred. The marginal increase in code size and complexity is more than offset by the increased flexibility we now have for testing. Furthermore, since this is production code, it’s easy to have a user in the field use this feature for testing, to determine if a similar problem has occurred. (So far, we haven’t had to ask for a test, but it’s nice to know that we could handle it if we had to.)

15.4.2.3 Add Logging Code

It is often the case that your application program is running on a system on which you can’t use a debugger (such as at a customer site). In that case, your goal is to be able to examine the program’s internal state, but from the outside. The only way to do that is to have the program itself produce this information for you. There are multiple ways to do this:

• Always log information to a specific file.
This is simplest: The program always writes logging information. You can then look at the file at your convenience.
13 By rewriting the buffer management code!
The disadvantage is that at some point the log file will consume all available disk space. Therefore, you should have multiple log files, with your program switching to a new one periodically. Brian Kernighan recommends naming the log files by day of the week: myapp.log.sun, myapp.log.mon, and so on. The advantage here is that you don’t have to manually move old files out of the way; you get a week’s worth of log files for free.

• Write to a log file only if it already exists. When your program starts up, if the log file exists, it writes information to the log. Otherwise, it doesn’t. To enable logging, first create an empty log file.

• Use a fixed format for messages, one that can be easily parsed by scripting languages such as awk or Perl, for summary and report generation.

• Alternatively, generate some form of XML, which is self-describing, and possibly convertible to other formats. (We’re not big fans of XML, but you shouldn’t let that stop you.)

• Use syslog() to do logging; the final disposition of logging messages can be controlled by the system administrator. (syslog() is a fairly advanced interface; see the syslog(3) manpage.)

Choosing how to log information is, of course, the easy part. The hard part is choosing what to log. As with all parts of program development, it pays to think before you code. Log information about critical variables. Check their values to make sure they’re in range or are otherwise what you expect. Log exceptional conditions; if something occurs that shouldn’t, log it, and if possible, keep going.

The key is to log only the information you need to track down problems, no more and no less.

15.4.2.4 Runtime Debugging Files

In a previous life, we worked for a startup company with binary executables of the product installed at customer sites. It wasn’t possible to attach a debugger to a running copy of the program or to run it from a debugger on the customer’s system.
The main component of the product was not started directly from a command line, but indirectly, through shell scripts that did considerable initial setup.
15.4 Programming for Debugging
To make the program start producing logging information, we came up with the idea of special debugging files. When a file of a certain name existed in a certain directory, the program would produce informational messages to a log file that we could then download and analyze. Such code looks like this:

    struct stat sbuf;
    extern int do_logging;                 /* initialized to zero */
    ...
    if (stat("/path/to/magic/.file", &sbuf) == 0)
        do_logging = TRUE;
    ...
    if (do_logging) {
        logging code here: open file, write info, close file, etc.
    }
The call to stat() happened for each job the program processed. Thus, we could dynamically enable and disable logging without having to stop and restart the application!

As with debugging options and variables, there are any number of variations on this theme: different files that enable logging of information about different subsystems, debugging directives added into the debugging file itself, and so on. As with all features, you should plan a design for what you will need and then implement it cleanly instead of hacking out some quick and dirty code at 3:00 A.M. (a not uncommon possibility in startup companies, unfortunately).

NOTE
All that glitters is not gold. Special debugging files are but one example of techniques known as back doors—one or more ways for developers to do undocumented things with a program, usually for nefarious purposes. In our instance, the back door was entirely benign. But an unscrupulous developer could just as easily arrange to generate and download a hidden copy of a customer list, personnel file, or other sensitive data. For this reason alone, you should think extra hard about whether this technique is usable in your application.
15.4.2.5 Add Special Hooks for Breakpoints

Often, a problem may be reproducible, but only after your program has first processed many megabytes of input data. Or, while you may know in which function your program is failing, the failure occurs only after the function has been called many hundreds, or even thousands, of times.
Chapter 15 • Debugging
This is a big problem when you're working in a debugger. If you set a breakpoint in the failing routine, you have to type the continue command and press ENTER hundreds or thousands of times to get your program into the state where it's about to fail. This is tedious and error prone, to say the least! It may even be so difficult to do that you'll want to give up before starting.

The solution is to add special debugging “hook” functions that your program can call when it is close to the state you're interested in. For example, suppose that you know that the check_salary() function is the one that fails, but only when it's been called 1,427 times. (We kid you not; we've seen some rather strange things in our time.) To catch check_salary() before it fails, create a special dummy function that does nothing but return, and then arrange for check_salary() to call it just before the 1,427th time that it itself is called:

    /* debug_dummy --- debugging hook function */

    void debug_dummy(void) { return; }

    struct salary *check_salary(void)
    {
        ... real variable declarations here ...
        static int count = 0;    /* for debugging */

        if (++count == 1426)
            debug_dummy();

        ... rest of the code here ...
    }
Now, from within GDB, set a breakpoint in debug_dummy(), and then run the program normally:

    (gdb) break debug_dummy                    Set breakpoint for dummy function
    Breakpoint 1 at 0x8055885: file whizprog.c, line 3137.
    (gdb) run                                  Start program running
    Starting program: /home/arnold/whizprog

Once the breakpoint for debug_dummy() is reached, you can set a second breakpoint for check_salary() and then continue execution:

    Breakpoint 1, debug_dummy () at whizprog.c:3137        Breakpoint reached
    3137    void debug_dummy(void) { return; }
    (gdb) break check_salary                   Set breakpoint for function of interest
    Breakpoint 2 at 0x8057913: file whizprog.c, line 3140.
    (gdb) cont
When the second breakpoint is reached, the program is about to fail and you can single-step through it, doing whatever is necessary to track down the problem.

Instead of using a fixed constant (‘++count == 1426’), you may wish to have a global variable that can be set by the debugger to whatever value you need. This avoids the need to recompile the program.

For gawk, we have gone a step further and brought the debugging hook facility into the language, so the hook function can be called from the awk program. When compiled for debugging, a special do-nothing function named stopme() is available. This function in turn calls a C function of the same name. This allows us to put calls to stopme() into a failing awk program right before things go wrong. For example, if gawk is producing bad results for an awk program on the 1,200th input record, we can add a line like this to the awk program:

    NR == 1198 { stopme() }    # Stop for debugging when Number of Records == 1198
    ... rest of awk program as before ...
Then, from within GDB, we can set a breakpoint on the C function stopme() and run the awk program. Once that breakpoint fires, we can then set breakpoints on the other parts of gawk where we suspect the real problem lies.

The hook-function technique is useful in and of itself. However, the ability to bring it to the application level multiplies its usefulness, and it has saved us untold hours of debugging time when tracking down obscure problems.
15.5 Debugging Tools

Besides GDB and whatever source code hooks you use for general debugging, there are a number of useful packages that can help find different kinds of problems. Because dynamic memory management is such a difficult task in large-scale programs, many tools focus on that area, often acting as drop-in replacements for malloc() and free(). There are commercial tools that do many (or all) of the same things as the programs we describe, but not all of them are available for GNU/Linux, and many are quite expensive. All of the packages discussed in this section are freely available.
15.5.1 The dbug Library — A Sophisticated printf()

The first package we examine is the dbug library. It is based on the idea of conditionally compiled debugging code we presented earlier in this chapter, but carries things much further, providing relatively sophisticated runtime tracing and conditional debug output. It implements many of the tips we described, saving you the trouble of implementing them yourself.

The dbug library, written by Fred Fish in the early 1980s, has seen modest enhancements since then. It is now explicitly in the public domain, so it can be used in both free and proprietary software, without problems. It is available from Fred Fish's FTP archive,14 as both a compressed tar file and as a ZIP archive. The documentation summarizes dbug well:

    dbug is an example of an internal debugger. Because it requires internal instrumentation of a program, and its usage does not depend on any special capabilities of the execution environment, it is always available and will execute in any environment that the program itself will execute in. In addition, since it is a complete package with a specific user interface, all programs which use it will be provided with similar debugging capabilities. This is in sharp contrast to other forms of internal instrumentation where each developer has their own, usually less capable, form of internal debugger…
The dbug package imposes only a slight speed penalty on executing programs, typically much less than 10 percent, and a modest size penalty, typically 10 to 20 percent. By defining a specific C preprocessor symbol, both of these can be reduced to zero with no changes required to the source code.

The following list is a quick summary of the capabilities of the dbug package. Each capability can be individually enabled or disabled at the time a program is invoked by specifying the appropriate command line arguments.

• Execution trace showing function level control flow in a semi-graphical manner using indentation to indicate nesting depth.

• Output the values of all, or any subset of, key internal variables.
14 ftp://ftp.ninemoons.com/pub/dbug/
• Limit actions to a specific set of named functions.

• Limit function trace to a specified nesting depth.

• Label each output line with source file name and line number.

• Label each output line with name of current process.

• Push or pop internal debugging state to allow execution with built-in debugging defaults.

• Redirect the debug output stream to standard output (stdout) or a named file. The default output stream is standard error (stderr). The redirection mechanism is completely independent of normal command line redirection to avoid output conflicts.

The dbug package requires you to use a certain discipline when writing your code. In particular, you have to use its macros when doing a function return or calling setjmp() and longjmp(). You have to add a single macro call as the first executable statement of each function and call a few extra macros from main(). Finally, you have to add a debugging command-line option: By convention this is -#, which is rarely, if ever, used as a real option. In return for the extra work, you get all the benefits just outlined. Let's look at the example in the manual:

     1  #include <stdio.h>
     2  #include "dbug.h"
     3
     4  int
     5  main (argc, argv)
     6  int argc;
     7  char *argv[];
     8  {
     9      register int result, ix;
    10      extern int factorial (), atoi ();
    11
    12      DBUG_ENTER ("main");
    13      DBUG_PROCESS (argv[0]);
    14      DBUG_PUSH_ENV ("DBUG");
    15      for (ix = 1; ix < argc && argv[ix][0] == '-'; ix++) {
    16          switch (argv[ix][1]) {
    17          case '#':
    18              DBUG_PUSH (&(argv[ix][2]));
    19              break;
    20          }
    21      }
    22      for (; ix < argc; ix++) {
    23          DBUG_PRINT ("args", ("argv[%d] = %s", ix, argv[ix]));
    24          result = factorial (atoi (argv[ix]));
    25          printf ("%d\n", result);
    26          fflush (stdout);
    27      }
    28      DBUG_RETURN (0);
    29  }
This program illustrates most of the salient points. The DBUG_ENTER() macro (line 12) must be called after any variable declarations and before any other code. (This is because it declares some private variables of its own.15)

The DBUG_PROCESS() macro (line 13) sets the name of the program, primarily for use in output messages from the library. This macro should be called only once, from main().

The DBUG_PUSH_ENV() macro (line 14) causes the library to look at the named environment variable (DBUG in this case) for control strings. (The dbug control strings are discussed shortly.) The library is capable of saving its current state and using a new one, creating a stack of saved states. Thus, this macro pushes the state obtained from the given environment variable onto the stack of saved states. As used in this example, the macro creates the initial state. If there is no such environment variable, nothing happens. (As an aside, DBUG is rather generic; perhaps something like GAWK_DBUG [for gawk] would be better.)

The DBUG_PUSH() macro (line 18) passes in the control string value obtained from the -# command-line option. (New code should use getopt() or getopt_long() instead of manual argument parsing.) This is normally how debugging is enabled, but using an environment variable as well provides additional flexibility.

The DBUG_PRINT() macro (line 23) is what produces output. The second argument uses the technique we described earlier (see Section 15.4.1.1, “Use Debugging Macros,” page 577) of enclosing the entire printf() argument list in parentheses, making it a single argument as far as the C preprocessor is concerned. Note that a terminating newline character is not provided in the format string; the dbug library provides the newline for you.
15 C99, which allows variable declarations mixed with executable code, makes this less of a problem, but remember that this package was designed for K&R C.
When printing, by default, the dbug library outputs all DBUG_PRINT() statements. The first argument is a string that can be used to limit the output just to DBUG_PRINT() macros using that string.

Finally, the DBUG_RETURN() macro (line 28) is used instead of a regular return statement to return a value. There is a corresponding DBUG_VOID_RETURN macro for use in void functions.

The rest of the program is completed with the factorial() function:

     1  #include <stdio.h>
     2  #include "dbug.h"
     3
     4  int factorial (value)
     5  register int value;
     6  {
     7      DBUG_ENTER ("factorial");
     8      DBUG_PRINT ("find", ("find %d factorial", value));
     9      if (value > 1) {
    10          value *= factorial (value - 1);
    11      }
    12      DBUG_PRINT ("result", ("result is %d", value));
    13      DBUG_RETURN (value);
    14  }
Once the program is compiled and linked with the dbug library, it can be run normally. By default, the program produces no debugging output. With debugging enabled, though, different kinds of output are available:

    $ factorial 1 2 3                      Regular run, no debugging
    1
    2
    6
    $ factorial -#t 1 2 3
    >factorial
    <factorial
    1
    >factorial
    | >factorial
    | <factorial
    <factorial
    2
    >factorial
    | >factorial
    | | >factorial
    | | <factorial
    | <factorial
    <factorial
    6