changeset 2362:6aa56b089139

[xemacs-hg @ 2004-11-02 09:51:04 by ben] To: xemacs-patches@xemacs.org internals/index.texi: Deleted. Incorporated into internals.texi. Having a separate index file messes up texinfo-master-menu. internals/internals.texi: Add bunches and bunches and bunches and bunches of stuff, taken from documentation floating around in various places -- text.c, file-coding.c, other .c and .h files, stuff that I wrote up for an old XEmacs contract, proposals written up in the process of an e-mail discussion, etc. Fix up some mistakes, esp. in CCL. Extra crap from CCL, duplicated with Lispref, removed. Sections on Old Future Work and Future Work Discussion added. Bunches of other work. Add bunches of documentation taken from the source code. Fixup various places to use @strong{}, @code{}, @file{}. Create new Text chapter, split off from Buffers and Textual Representation. Create new chapter for MS Windows, mostly written from scratch. Consolidate all Mule info under "Multilingual Support". Break up chapter on modules and move some parts to the sections discussing the modules, for consolidation purposes. Add a big cross-reference table for all the modules to where they're discussed (or not). New chapter Asynchronous Events; Quit Checking. (Taken from various parts of the code.) New Introduction. New section on Focus Handling (from the code). NOTE that in the process, I discovered that we essentially have FOUR redundant introductions to Mule issues! Someone really needs to go through and clean them up and integrate them (sjt?).
author ben
date Tue, 02 Nov 2004 09:51:18 +0000
parents 5ff532e448b5
children 9b5d77fbb8c3
files man/ChangeLog man/internals/internals.texi
diffstat 2 files changed, 11854 insertions(+), 2109 deletions(-) [+]
line wrap: on
line diff
--- a/man/ChangeLog	Mon Nov 01 22:51:20 2004 +0000
+++ b/man/ChangeLog	Tue Nov 02 09:51:18 2004 +0000
@@ -1,3 +1,262 @@
+2004-11-02  Ben Wing  <ben@xemacs.org>
+
+	* internals/index.texi:
+	Deleted.  Incorporated into internals.texi.  Having a separate
+	index file messes up texinfo-master-menu.
+	
+	* internals/internals.texi:
+	* internals/internals.texi (Top):
+	* internals/internals.texi (Introduction):
+	* internals/internals.texi (Authorship of XEmacs):
+	* internals/internals.texi (A History of Emacs):
+	* internals/internals.texi (Through Version 18):
+	* internals/internals.texi (Lucid Emacs):
+	* internals/internals.texi (GNU Emacs 19):
+	* internals/internals.texi (GNU Emacs 20):
+	* internals/internals.texi (XEmacs):
+	* internals/internals.texi (XEmacs From the Outside):
+	* internals/internals.texi (The Lisp Language):
+	* internals/internals.texi (XEmacs From the Perspective of Building):
+	* internals/internals.texi (The XEmacs Object System (Abstractly Speaking)):
+	* internals/internals.texi (How Lisp Objects Are Represented in C):
+	* internals/internals.texi (Major Textual Changes):
+	* internals/internals.texi (Great Integral Type Renaming):
+	* internals/internals.texi (Text/Char Type Renaming):
+	* internals/internals.texi (Rules When Writing New C Code):
+	* internals/internals.texi (A Reader's Guide to XEmacs Coding Conventions):
+	* internals/internals.texi (General Coding Rules):
+	* internals/internals.texi (Object-Oriented Techniques for C):
+	* internals/internals.texi (Writing Lisp Primitives):
+	* internals/internals.texi (Writing Good Comments):
+	* internals/internals.texi (Adding Global Lisp Variables):
+	* internals/internals.texi (Writing Macros):
+	* internals/internals.texi (Proper Use of Unsigned Types):
+	* internals/internals.texi (Techniques for XEmacs Developers):
+	* internals/internals.texi (Regression Testing XEmacs):
+	* internals/internals.texi (How to Regression-Test):
+	* internals/internals.texi (Modules for Regression Testing):
+	* internals/internals.texi (CVS Techniques):
+	* internals/internals.texi (Merging a Branch into the Trunk):
+	* internals/internals.texi (The Modules of XEmacs):
+	* internals/internals.texi (A Summary of the Various XEmacs Modules):
+	* internals/internals.texi (Low-Level Modules):
+	* internals/internals.texi (Basic Lisp Modules):
+	* internals/internals.texi (Modules for Standard Editing Operations):
+	* internals/internals.texi (Modules for Interfacing with the File System):
+	* internals/internals.texi (Modules for Other Aspects of the Lisp Interpreter and Object System):
+	* internals/internals.texi (Modules for Interfacing with the Operating System):
+	* internals/internals.texi (Allocation of Objects in XEmacs Lisp):
+	* internals/internals.texi (Introduction to Allocation):
+	* internals/internals.texi (Garbage Collection):
+	* internals/internals.texi (GCPROing):
+	* internals/internals.texi (Garbage Collection - Step by Step):
+	* internals/internals.texi (Invocation):
+	* internals/internals.texi (garbage_collect_1):
+	* internals/internals.texi (mark_object):
+	* internals/internals.texi (gc_sweep):
+	* internals/internals.texi (sweep_lcrecords_1):
+	* internals/internals.texi (compact_string_chars):
+	* internals/internals.texi (Integers and Characters):
+	* internals/internals.texi (Allocation from Frob Blocks):
+	* internals/internals.texi (lrecords):
+	* internals/internals.texi (Low-level allocation):
+	* internals/internals.texi (Cons):
+	* internals/internals.texi (Vector):
+	* internals/internals.texi (Symbol):
+	* internals/internals.texi (Marker):
+	* internals/internals.texi (String):
+	* internals/internals.texi (Dumping):
+	* internals/internals.texi (Dumping Justification):
+	* internals/internals.texi (Overview):
+	* internals/internals.texi (Data descriptions):
+	* internals/internals.texi (Dumping phase):
+	* internals/internals.texi (Object inventory):
+	* internals/internals.texi (Address allocation):
+	* internals/internals.texi (The header):
+	* internals/internals.texi (Data dumping):
+	* internals/internals.texi (Pointers dumping):
+	* internals/internals.texi (Reloading phase):
+	* internals/internals.texi (Remaining issues):
+	* internals/internals.texi (Events and the Event Loop):
+	* internals/internals.texi (Introduction to Events):
+	* internals/internals.texi (Main Loop):
+	* internals/internals.texi (Specifics of the Event Gathering Mechanism):
+	* internals/internals.texi (Specifics About the Emacs Event):
+	* internals/internals.texi (Event Queues):
+	* internals/internals.texi (Event Stream Callback Routines):
+	* internals/internals.texi (IMPORTANT): New.
+	* internals/internals.texi (Other Event Loop Functions):
+	* internals/internals.texi (Stream Pairs):
+	* internals/internals.texi (Converting Events):
+	* internals/internals.texi (Dispatching Events; The Command Builder):
+	* internals/internals.texi (Focus Handling):
+	* internals/internals.texi (Editor-Level Control Flow Modules):
+	* internals/internals.texi (Asynchronous Events; Quit Checking):
+	* internals/internals.texi (Control-G (Quit) Checking):
+	* internals/internals.texi (completely): New.
+	* internals/internals.texi (Profiling):
+	* internals/internals.texi (Exiting):
+	* internals/internals.texi (BEWARE): New.
+	* internals/internals.texi (Evaluation; Stack Frames; Bindings):
+	* internals/internals.texi (Evaluation):
+	* internals/internals.texi (Dynamic Binding; The specbinding Stack; Unwind-Protects):
+	* internals/internals.texi (Simple Special Forms):
+	* internals/internals.texi (Catch and Throw):
+	* internals/internals.texi (Introduction to Symbols):
+	* internals/internals.texi (Obarrays):
+	* internals/internals.texi (Symbol Values):
+	* internals/internals.texi (Buffers):
+	* internals/internals.texi (Introduction to Buffers):
+	* internals/internals.texi (Buffer Lists):
+	* internals/internals.texi (Markers and Extents):
+	* internals/internals.texi (The Buffer Object):
+	* internals/internals.texi (Text):
+	* internals/internals.texi (The Text in a Buffer):
+	* internals/internals.texi (Ibytes and Ichars):
+	* internals/internals.texi (Byte-Char Position Conversion):
+	* internals/internals.texi (Searching and Matching):
+	* internals/internals.texi (Multilingual Support):
+	* internals/internals.texi (Introduction to Multilingual Issues #1):
+	* internals/internals.texi (Introduction to Multilingual Issues #2):
+	* internals/internals.texi (Introduction to Multilingual Issues #3):
+	* internals/internals.texi (Introduction to Multilingual Issues #4):
+	* internals/internals.texi (Character Sets):
+	* internals/internals.texi (Encodings):
+	* internals/internals.texi (Japanese EUC (Extended Unix Code)):
+	* internals/internals.texi (JIS7):
+	* internals/internals.texi (Internal Mule Encodings):
+	* internals/internals.texi (Internal String Encoding):
+	* internals/internals.texi (Internal Character Encoding):
+	* internals/internals.texi (Byte/Character Types; Buffer Positions; Other Typedefs):
+	* internals/internals.texi (Byte Types):
+	* internals/internals.texi (Different Ways of Seeing Internal Text):
+	* internals/internals.texi (prefixes): New.
+	* internals/internals.texi (C): New.
+	* internals/internals.texi (U): New.
+	* internals/internals.texi (S): New.
+	* internals/internals.texi (Specifically): New.
+	* internals/internals.texi (Buffer Positions):
+	* internals/internals.texi (Other Typedefs):
+	* internals/internals.texi (Usage of the Various Representations):
+	* internals/internals.texi (Working With the Various Representations):
+	* internals/internals.texi (Internal Text API's):
+	* internals/internals.texi (Basic internal-format API's):
+	* internals/internals.texi (The DFC API):
+	* internals/internals.texi (The Eistring API):
+	* internals/internals.texi (Coding for Mule):
+	* internals/internals.texi (Character-Related Data Types):
+	* internals/internals.texi (Working With Character and Byte Positions):
+	* internals/internals.texi (Conversion to and from External Data):
+	* internals/internals.texi (General Guidelines for Writing Mule-Aware Code):
+	* internals/internals.texi (An Example of Mule-Aware Code):
+	* internals/internals.texi (Mule-izing Code):
+	* internals/internals.texi (CCL):
+	* internals/internals.texi (Modules for Internationalization):
+	* internals/internals.texi (The Lisp Reader and Compiler):
+	* internals/internals.texi (Lstreams):
+	* internals/internals.texi (Creating an Lstream):
+	* internals/internals.texi (Lstream Types):
+	* internals/internals.texi (Lstream Functions):
+	* internals/internals.texi (Lstream Methods):
+	* internals/internals.texi (Introduction to Consoles; Devices; Frames; Windows):
+	* internals/internals.texi (Point):
+	* internals/internals.texi (Window Hierarchy):
+	* internals/internals.texi (The Window Object):
+	* internals/internals.texi (Modules for the Basic Displayable Lisp Objects):
+	* internals/internals.texi (The Redisplay Mechanism):
+	* internals/internals.texi (Critical Redisplay Sections):
+	* internals/internals.texi (Line Start Cache):
+	* internals/internals.texi (Redisplay Piece by Piece):
+	* internals/internals.texi (Modules for the Redisplay Mechanism):
+	* internals/internals.texi (Modules for other Display-Related Lisp Objects):
+	* internals/internals.texi (Introduction to Extents):
+	* internals/internals.texi (Extent Ordering):
+	* internals/internals.texi (Format of the Extent Info):
+	* internals/internals.texi (Zero-Length Extents):
+	* internals/internals.texi (Mathematics of Extent Ordering):
+	* internals/internals.texi (Extent Fragments):
+	* internals/internals.texi (Faces):
+	* internals/internals.texi (Glyphs):
+	* internals/internals.texi (Specifiers):
+	* internals/internals.texi (Menus):
+	* internals/internals.texi (Subprocesses):
+	* internals/internals.texi (Interface to MS Windows):
+	* internals/internals.texi (Different kinds of Windows environments):
+	* internals/internals.texi (Windows Build Flags):
+	* internals/internals.texi (Windows I18N Introduction):
+	* internals/internals.texi (Modules for Interfacing with MS Windows):
+	* internals/internals.texi (Interface to the X Window System):
+	* internals/internals.texi (Generic Widget Interface):
+	* internals/internals.texi (Scrollbars):
+	* internals/internals.texi (Menubars):
+	* internals/internals.texi (Checkboxes and Radio Buttons):
+	* internals/internals.texi (Modules for Interfacing with X Windows):
+	* internals/internals.texi (Future Work):
+	* internals/internals.texi (Future Work -- Elisp Compatibility Package):
+	* internals/internals.texi (Future Work -- Drag-n-Drop):
+	* internals/internals.texi (Future Work -- Standard Interface for Enabling Extensions):
+	* internals/internals.texi (Future Work -- Better Initialization File Scheme):
+	* internals/internals.texi (Future Work -- Keyword Parameters):
+	* internals/internals.texi (Future Work -- Property Interface Changes):
+	* internals/internals.texi (Future Work -- Easier Toolbar Customization):
+	* internals/internals.texi (Future Work -- Toolbar Interface Changes):
+	* internals/internals.texi (Future Work -- Menu API Changes):
+	* internals/internals.texi (Future Work -- Removal of Misc-User Event Type):
+	* internals/internals.texi (Future Work -- Mouse Pointer):
+	* internals/internals.texi (Future Work -- Abstracted Mouse Pointer Interface):
+	* internals/internals.texi (Future Work -- Busy Pointer):
+	* internals/internals.texi (Future Work -- Extents):
+	* internals/internals.texi (Future Work -- Everything should obey duplicable extents):
+	* internals/internals.texi (Future Work -- Version Number and Development Tree Organization):
+	* internals/internals.texi (Future Work -- Improvements to the @code{xemacs.org} Website):
+	* internals/internals.texi (Future Work -- Keybinding Schemes):
+	* internals/internals.texi (Future Work -- Better Support for Windows Style Key Bindings):
+	* internals/internals.texi (Future Work -- Misc Key Binding Ideas):
+	* internals/internals.texi (Future Work -- Byte Code Snippets):
+	* internals/internals.texi (Future Work -- Autodetection):
+	* internals/internals.texi (Future Work -- Conversion Error Detection):
+	* internals/internals.texi (Future Work -- BIDI Support):
+	* internals/internals.texi (Future Work -- Localized Text/Messages):
+	* internals/internals.texi (freeze): New.
+	* internals/internals.texi (fail-safe): New.
+	* internals/internals.texi (like): New.
+	* internals/internals.texi (user): New.
+	* internals/internals.texi (ben): New.
+	* internals/internals.texi ('type): New.
+	* internals/internals.texi (NOTE): New.
+	* internals/internals.texi (ILLEGIBLE): New.
+	* internals/internals.texi (language): New.
+	* internals/internals.texi (preprocessing): New.
+	* internals/internals.texi (Subject): New.
+	* internals/internals.texi (http): New.
+	* internals/internals.texi (Now): Removed.
+	* internals/internals.texi (wrong): New.
+	* internals/internals.texi (Proof): Removed.
+
+	Add bunches and bunches and bunches and bunches of stuff, taken
+	from documentation floating around in various places -- text.c,
+	file-coding.c, other .c and .h files, stuff that I wrote up for an
+	old XEmacs contract, proposals written up in the process of an
+	e-mail discussion, etc.  Fix up some mistakes, esp. in CCL.  Extra
+	crap from CCL, duplicated with Lispref, removed.  Sections on Old
+	Future Work and Future Work Discussion added.
+
+	Bunches of other work.  Add bunches of documentation taken from the
+	source code.  Fixup various places to use @strong{}, @code{},
+	@file{}.  Create new Text chapter, split off from Buffers and
+	Textual Representation.  Create new chapter for MS Windows, mostly
+	written from scratch.  Consolidate all Mule info under
+	"Multilingual Support".  Break up chapter on modules and move some
+	parts to the sections discussing the modules, for consolidation
+	purposes.  Add a big cross-reference table for all the modules to
+	where they're discussed (or not).  New chapter Asynchronous
+	Events; Quit Checking. (Taken from various parts of the code.) New
+	Introduction.  New section on Focus Handling (from the code).
+
+	NOTE that in the process, I discovered that we essentially have
+	FOUR redundant introductions to Mule issues!  Someone really needs
+	to go through and clean them up and integrate them (sjt?).
+
 2003-07-18  Alexey Mahotkin  <alexm@hsys.msk.ru>
 
 	* lispref/windows.texi (Basic Windows): Fix typo.
--- a/man/internals/internals.texi	Mon Nov 01 22:51:20 2004 +0000
+++ b/man/internals/internals.texi	Tue Nov 02 09:51:18 2004 +0000
@@ -10,7 +10,17 @@
 * Internals: (internals).       XEmacs Internals Manual.
 @end direntry
 
-Copyright @copyright{} 1992 - 1996 Ben Wing.
+Edition History:
+
+Created November 1995 (?) by Ben Wing.
+XEmacs Internals Manual Version 1.0, March, 1996.
+XEmacs Internals Manual Version 1.1, March, 1997.
+XEmacs Internals Manual Version 1.4, March, 2001.
+XEmacs Internals Manual Version 21.5, October, 2004.
+@c Please REMEMBER to update edition number in *four* places in this file,
+@c including adding a line above.
+
+Copyright @copyright{} 1992 - 2004 Ben Wing.
 Copyright @copyright{} 1996, 1997 Sun Microsystems.
 Copyright @copyright{} 1994 - 1998, 2002, 2003 Free Software Foundation.
 Copyright @copyright{} 1994, 1995 Board of Trustees, University of Illinois.
@@ -63,25 +73,35 @@
 
 @titlepage
 @title XEmacs Internals Manual
-@subtitle Version 1.4, March 2001
+@subtitle Version 21.5, October 2004
 
 @author Ben Wing
+@sp 1
+
+Improvements by
+
+@sp 1
+
+@author Stephen Turnbull
 @author Martin Buchholz
 @author Hrvoje Niksic
 @author Matthias Neubauer
 @author Olivier Galibert
+@author Andy Piper
+
+
 @page
 @vskip 0pt plus 1fill
 
 @noindent
-Copyright @copyright{} 1992 - 1996, 2001 Ben Wing. @*
-Copyright @copyright{} 1996, 1997 Sun Microsystems, Inc. @*
-Copyright @copyright{} 1994 - 1998 Free Software Foundation. @*
+Copyright @copyright{} 1992 - 2004 Ben Wing. @*
+Copyright @copyright{} 1996, 1997 Sun Microsystems. @*
+Copyright @copyright{} 1994 - 1998, 2002, 2003 Free Software Foundation. @*
 Copyright @copyright{} 1994, 1995 Board of Trustees, University of Illinois.
 
 @sp 2
-Version 1.4 @*
-March 2001.@*
+Version 21.5 @*
+October 2004.@*
 
 Permission is granted to make and distribute verbatim copies of this
 manual provided the copyright notice and this permission notice are
@@ -102,49 +122,137 @@
 @end titlepage
 @page
 
-@node Top, A History of Emacs, (dir), (dir)
+@node Top, Introduction, (dir), (dir)
 
 @ifinfo
-This Info file contains v1.4 of the XEmacs Internals Manual, March 2001.
+This Info file contains v21.5 of the XEmacs Internals Manual, October 2004.
 @end ifinfo
 
+@c Don't update this by hand!!!!!!
+@c Use C-u C-c C-u m (aka C-u M-x texinfo-master-list).
+@c NOTE: This command does not include the Index:: menu entry.
+@c You must add it by hand.
+
+@c Here are some useful Lisp routines for quickly Texinfo-izing text that
+@c has been formatted into ASCII lists and tables.  The first routine is
+@c currently more general and well-developed than the second.
+
+@c (defun list-to-texinfo (b e)
+@c   "Convert the selected region from an ASCII list to a Texinfo list."
+@c   (interactive "r")
+@c   (save-restriction
+@c     (narrow-to-region b e)
+@c     (goto-char (point-min))
+@c     (let ((dash-type "^ *-+ +")
+@c 	  (num-type "^ *[[(]?\\([0-9]+\\|[a-z]\\)[]).] +")
+@c 	  dash)
+@c       (save-excursion
+@c 	(cond ((re-search-forward num-type nil t))
+@c 	      ((re-search-forward dash-type nil t) (setq dash t))
+@c 	      (t (error "No table entries?"))))
+@c       (if dash (insert "@itemize @bullet\n")
+@c 	(insert "@enumerate\n"))
+@c       (while (re-search-forward (if dash dash-type num-type) nil t)
+@c 	(let ((p (point)))
+@c 	  (or (re-search-forward (if dash dash-type num-type) nil t)
+@c 	      (goto-char (point-max)))
+@c 	  (beginning-of-line)
+@c 	  (forward-line -1)
+@c 	  (let ((q (point)))
+@c 	    (goto-char p)
+@c 	    (kill-rectangle p q))
+@c 	  (insert "@item\n")))
+@c       (goto-char (point-max))
+@c       (beginning-of-line)
+@c       (if dash (insert "@end itemize\n")
+@c 	(insert "@end enumerate\n")))))
+
+@c (defun table-to-texinfo (b e)
+@c   "Convert the selected region from an ASCII table to a Texinfo table."
+@c   (interactive "r")
+@c   (save-restriction
+@c     (narrow-to-region b e)
+@c     (goto-char (point-min))
+@c     (insert "@table @code\n")
+@c     (while (not (eobp))
+@c       (insert "@item ")
+@c       (forward-sexp)
+@c       (delete-char)
+@c       (insert "\n")
+@c       (or (search-forward "\n\n" nil t)
+@c 	  (goto-char (point-max))))
+@c     (beginning-of-line)
+@c     (insert "@end table\n")))
+
+@c A useful Lisp routine for adding markup based on conventions used in plain
+@c text files; see doc string below.
+
+@c (defun convert-text-to-texinfo (&optional no-narrow)
+@c   "Convert text to Texinfo.
+@c If the region is active, do the region; otherwise, go from point to the end
+@c of the buffer.  This query-replaces for various kinds of conventions used
+@c in text: @code{} surrounded by ` and ' or followed by a (); @strong{}
+@c surrounded by *'s; @file{} something that looks like a file name."
+@c   (interactive)
+@c   (if (and (region-active-p) (not no-narrow))
+@c       (save-restriction
+@c 	(narrow-to-region (region-beginning) (region-end))
+@c 	(convert-text-to-texinfo t))
+@c     (let ((p (point))
+@c 	  (case-replace nil))
+@c       (query-replace-regexp "`\\([^']+\\)'\\([^']\\)" "@code{\\1}\\2" nil)
+@c       (goto-char p)
+@c       (query-replace-regexp "\\(\\Sw\\)\\*\\(\\(?:\\s_\\|\\sw\\)+\\)\\*\\([^A-Za-z.}]\\)" "\\1@strong{\\2}\\3" nil)
+@c       (goto-char p)
+@c       (query-replace-regexp "\\(\\(\\s_\\|\\sw\\)+()\\)\\([^}]\\)" "@code{\\1}\\3" nil)
+@c       (goto-char p)
+@c       (query-replace-regexp "\\(\\(\\s_\\|\\sw\\)+\\.[A-Za-z]+\\)\\([^A-Za-z.}]\\)" "@file{\\1}\\3" nil)
+@c       )))
+
 @menu
+* Introduction::                Overview of this manual.
+* Authorship of XEmacs::        
 * A History of Emacs::          Times, dates, important events.
 * XEmacs From the Outside::     A broad conceptual overview.
 * The Lisp Language::           An overview.
-* XEmacs From the Perspective of Building::
-* Build-Time Dependencies::
-* XEmacs From the Inside::
-* The XEmacs Object System (Abstractly Speaking)::
-* How Lisp Objects Are Represented in C::
-* Major Textual Changes::
-* Rules When Writing New C Code::
-* Regression Testing XEmacs::
-* CVS Techniques::
-* A Summary of the Various XEmacs Modules::
-* Allocation of Objects in XEmacs Lisp::
-* Dumping::
-* Events and the Event Loop::
-* Evaluation; Stack Frames; Bindings::
-* Symbols and Variables::
-* Buffers and Textual Representation::
-* MULE Character Sets and Encodings::
-* The Lisp Reader and Compiler::
-* Lstreams::
-* Consoles; Devices; Frames; Windows::
-* The Redisplay Mechanism::
-* Extents::
-* Faces::
-* Glyphs::
-* Specifiers::
-* Menus::
-* Subprocesses::
-* Interface to the X Window System::
-* Index::
+* XEmacs From the Perspective of Building::  
+* Build-Time Dependencies::     
+* XEmacs From the Inside::      
+* The XEmacs Object System (Abstractly Speaking)::  
+* How Lisp Objects Are Represented in C::  
+* Major Textual Changes::       
+* Rules When Writing New C Code::  
+* Regression Testing XEmacs::   
+* CVS Techniques::              
+* The Modules of XEmacs::       
+* Allocation of Objects in XEmacs Lisp::  
+* Dumping::                     
+* Events and the Event Loop::   
+* Asynchronous Events; Quit Checking::  
+* Evaluation; Stack Frames; Bindings::  
+* Symbols and Variables::       
+* Buffers::                     
+* Text::                        
+* Multilingual Support::        
+* The Lisp Reader and Compiler::  
+* Lstreams::                    
+* Consoles; Devices; Frames; Windows::  
+* The Redisplay Mechanism::     
+* Extents::                     
+* Faces::                       
+* Glyphs::                      
+* Specifiers::                  
+* Menus::                       
+* Subprocesses::                
+* Interface to MS Windows::     
+* Interface to the X Window System::  
+* Future Work::                 
+* Future Work Discussion::      
+* Old Future Work::             
+* Index::                       
 
 @detailmenu
-
---- The Detailed Node Listing ---
+ --- The Detailed Node Listing ---
 
 A History of Emacs
 
@@ -154,140 +262,186 @@
 * GNU Emacs 20::                The other version 20 Emacs.
 * XEmacs::                      The continuation of Lucid Emacs.
 
+Major Textual Changes
+
+* Great Integral Type Renaming::  
+* Text/Char Type Renaming::     
+
 Rules When Writing New C Code
 
-* General Coding Rules::
-* Writing Lisp Primitives::
-* Writing Good Comments::
-* Adding Global Lisp Variables::
-* Proper Use of Unsigned Types::
-* Coding for Mule::
-* Techniques for XEmacs Developers::
-
-Coding for Mule
-
-* Character-Related Data Types::
-* Working With Character and Byte Positions::
-* Conversion to and from External Data::
-* General Guidelines for Writing Mule-Aware Code::
-* An Example of Mule-Aware Code::
-
-CVS Techniques
-
-* Merging a Branch into the Trunk::
+* A Reader's Guide to XEmacs Coding Conventions::  
+* General Coding Rules::        
+* Object-Oriented Techniques for C::  
+* Writing Lisp Primitives::     
+* Writing Good Comments::       
+* Adding Global Lisp Variables::  
+* Writing Macros::              
+* Proper Use of Unsigned Types::  
+* Techniques for XEmacs Developers::  
 
 Regression Testing XEmacs
 
-A Summary of the Various XEmacs Modules
-
-* Low-Level Modules::
-* Basic Lisp Modules::
-* Modules for Standard Editing Operations::
-* Editor-Level Control Flow Modules::
-* Modules for the Basic Displayable Lisp Objects::
-* Modules for other Display-Related Lisp Objects::
-* Modules for the Redisplay Mechanism::
-* Modules for Interfacing with the File System::
-* Modules for Other Aspects of the Lisp Interpreter and Object System::
-* Modules for Interfacing with the Operating System::
-* Modules for Interfacing with X Windows::
-* Modules for Internationalization::
-* Modules for Regression Testing::
+* How to Regression-Test::      
+* Modules for Regression Testing::  
+
+CVS Techniques
+
+* Merging a Branch into the Trunk::  
+
+The Modules of XEmacs
+
+* A Summary of the Various XEmacs Modules::  
+* Low-Level Modules::           
+* Basic Lisp Modules::          
+* Modules for Standard Editing Operations::  
+* Modules for Interfacing with the File System::  
+* Modules for Other Aspects of the Lisp Interpreter and Object System::  
+* Modules for Interfacing with the Operating System::  
 
 Allocation of Objects in XEmacs Lisp
 
-* Introduction to Allocation::
-* Garbage Collection::
-* GCPROing::
-* Garbage Collection - Step by Step::
-* Integers and Characters::
-* Allocation from Frob Blocks::
-* lrecords::
-* Low-level allocation::
-* Cons::
-* Vector::
-* Bit Vector::
-* Symbol::
-* Marker::
-* String::
-* Compiled Function::
+* Introduction to Allocation::  
+* Garbage Collection::          
+* GCPROing::                    
+* Garbage Collection - Step by Step::  
+* Integers and Characters::     
+* Allocation from Frob Blocks::  
+* lrecords::                    
+* Low-level allocation::        
+* Cons::                        
+* Vector::                      
+* Bit Vector::                  
+* Symbol::                      
+* Marker::                      
+* String::                      
+* Compiled Function::           
 
 Garbage Collection - Step by Step
 
-* Invocation::
-* garbage_collect_1::
-* mark_object::
-* gc_sweep::
-* sweep_lcrecords_1::
-* compact_string_chars::
-* sweep_strings::
-* sweep_bit_vectors_1::
+* Invocation::                  
+* garbage_collect_1::           
+* mark_object::                 
+* gc_sweep::                    
+* sweep_lcrecords_1::           
+* compact_string_chars::        
+* sweep_strings::               
+* sweep_bit_vectors_1::         
 
 Dumping
 
-* Overview::
-* Data descriptions::
-* Dumping phase::
-* Reloading phase::
+* Dumping Justification::       
+* Overview::                    
+* Data descriptions::           
+* Dumping phase::               
+* Reloading phase::             
+* Remaining issues::            
 
 Dumping phase
 
-* Object inventory::
-* Address allocation::
-* The header::
-* Data dumping::
-* Pointers dumping::
+* Object inventory::            
+* Address allocation::          
+* The header::                  
+* Data dumping::                
+* Pointers dumping::            
 
 Events and the Event Loop
 
-* Introduction to Events::
-* Main Loop::
-* Specifics of the Event Gathering Mechanism::
-* Specifics About the Emacs Event::
-* The Event Stream Callback Routines::
-* Other Event Loop Functions::
-* Converting Events::
-* Dispatching Events; The Command Builder::
+* Introduction to Events::      
+* Main Loop::                   
+* Specifics of the Event Gathering Mechanism::  
+* Specifics About the Emacs Event::  
+* Event Queues::                
+* Event Stream Callback Routines::  
+* Other Event Loop Functions::  
+* Stream Pairs::                
+* Converting Events::           
+* Dispatching Events; The Command Builder::  
+* Focus Handling::              
+* Editor-Level Control Flow Modules::  
+
+Asynchronous Events; Quit Checking
+
+* Signal Handling::             
+* Control-G (Quit) Checking::   
+* Profiling::                   
+* Asynchronous Timeouts::       
+* Exiting::                     
 
 Evaluation; Stack Frames; Bindings
 
-* Evaluation::
-* Dynamic Binding; The specbinding Stack; Unwind-Protects::
-* Simple Special Forms::
-* Catch and Throw::
+* Evaluation::                  
+* Dynamic Binding; The specbinding Stack; Unwind-Protects::  
+* Simple Special Forms::        
+* Catch and Throw::             
 
 Symbols and Variables
 
-* Introduction to Symbols::
-* Obarrays::
-* Symbol Values::
-
-Buffers and Textual Representation
+* Introduction to Symbols::     
+* Obarrays::                    
+* Symbol Values::               
+
+Buffers
 
 * Introduction to Buffers::     A buffer holds a block of text such as a file.
-* The Text in a Buffer::        Representation of the text in a buffer.
 * Buffer Lists::                Keeping track of all buffers.
 * Markers and Extents::         Tagging locations within a buffer.
+* The Buffer Object::           The Lisp object corresponding to a buffer.
+
+Text
+
+* The Text in a Buffer::        Representation of the text in a buffer.
 * Ibytes and Ichars::           Representation of individual characters.
-* The Buffer Object::           The Lisp object corresponding to a buffer.
+* Byte-Char Position Conversion::  
 * Searching and Matching::      Higher-level algorithms.
 
-MULE Character Sets and Encodings
-
-* Character Sets::
-* Encodings::
-* Internal Mule Encodings::
-* CCL::
+Multilingual Support
+
+* Introduction to Multilingual Issues #1::  
+* Introduction to Multilingual Issues #2::  
+* Introduction to Multilingual Issues #3::  
+* Introduction to Multilingual Issues #4::  
+* Character Sets::              
+* Encodings::                   
+* Internal Mule Encodings::     
+* Byte/Character Types; Buffer Positions; Other Typedefs::  
+* Internal Text API's::         
+* Coding for Mule::             
+* CCL::                         
+* Modules for Internationalization::  
 
 Encodings
 
-* Japanese EUC (Extended Unix Code)::
-* JIS7::
+* Japanese EUC (Extended Unix Code)::  
+* JIS7::                        
 
 Internal Mule Encodings
 
-* Internal String Encoding::
-* Internal Character Encoding::
+* Internal String Encoding::    
+* Internal Character Encoding::  
+
+Byte/Character Types; Buffer Positions; Other Typedefs
+
+* Byte Types::                  
+* Different Ways of Seeing Internal Text::  
+* Buffer Positions::            
+* Other Typedefs::              
+* Usage of the Various Representations::  
+* Working With the Various Representations::  
+
+Internal Text API's
+
+* Basic internal-format API's::  
+* The DFC API::                 
+* The Eistring API::            
+
+Coding for Mule
+
+* Character-Related Data Types::  
+* Working With Character and Byte Positions::  
+* Conversion to and from External Data::  
+* General Guidelines for Writing Mule-Aware Code::  
+* An Example of Mule-Aware Code::  
+* Mule-izing Code::             
 
 Lstreams
 
@@ -298,16 +452,19 @@
 
 Consoles; Devices; Frames; Windows
 
-* Introduction to Consoles; Devices; Frames; Windows::
-* Point::
-* Window Hierarchy::
-* The Window Object::
+* Introduction to Consoles; Devices; Frames; Windows::  
+* Point::                       
+* Window Hierarchy::            
+* The Window Object::           
+* Modules for the Basic Displayable Lisp Objects::  
 
 The Redisplay Mechanism
 
-* Critical Redisplay Sections::
-* Line Start Cache::
-* Redisplay Piece by Piece::
+* Critical Redisplay Sections::  
+* Line Start Cache::            
+* Redisplay Piece by Piece::    
+* Modules for the Redisplay Mechanism::  
+* Modules for other Display-Related Lisp Objects::  
 
 Extents
 
@@ -318,10 +475,306 @@
 * Mathematics of Extent Ordering::  A rigorous foundation.
 * Extent Fragments::            Cached information useful for redisplay.
 
+Interface to MS Windows
+
+* Different kinds of Windows environments::  
+* Windows Build Flags::         
+* Windows I18N Introduction::   
+* Modules for Interfacing with MS Windows::  
+
+Interface to the X Window System
+
+* Lucid Widget Library::        An interface to various widget sets.
+* Modules for Interfacing with X Windows::  
+
+Lucid Widget Library
+
+* Generic Widget Interface::    The lwlib generic widget interface.
+* Scrollbars::                  
+* Menubars::                    
+* Checkboxes and Radio Buttons::  
+* Progress Bars::               
+* Tab Controls::                
+
+Future Work
+
+* Future Work -- Elisp Compatibility Package::  
+* Future Work -- Drag-n-Drop::  
+* Future Work -- Standard Interface for Enabling Extensions::  
+* Future Work -- Better Initialization File Scheme::  
+* Future Work -- Keyword Parameters::  
+* Future Work -- Property Interface Changes::  
+* Future Work -- Toolbars::     
+* Future Work -- Menu API Changes::  
+* Future Work -- Removal of Misc-User Event Type::  
+* Future Work -- Mouse Pointer::  
+* Future Work -- Extents::      
+* Future Work -- Version Number and Development Tree Organization::  
+* Future Work -- Improvements to the @code{xemacs.org} Website::  
+* Future Work -- Keybindings::  
+* Future Work -- Byte Code Snippets::  
+* Future Work -- Lisp Stream API::  
+* Future Work -- Multiple Values::  
+* Future Work -- Macros::       
+* Future Work -- Specifiers::   
+* Future Work -- Display Tables::  
+* Future Work -- Making Elisp Function Calls Faster::  
+* Future Work -- Lisp Engine Replacement::  
+
+Future Work -- Toolbars
+
+* Future Work -- Easier Toolbar Customization::  
+* Future Work -- Toolbar Interface Changes::  
+
+Future Work -- Mouse Pointer
+
+* Future Work -- Abstracted Mouse Pointer Interface::  
+* Future Work -- Busy Pointer::  
+
+Future Work -- Extents
+
+* Future Work -- Everything should obey duplicable extents::  
+
+Future Work -- Keybindings
+
+* Future Work -- Keybinding Schemes::  
+* Future Work -- Better Support for Windows Style Key Bindings::  
+* Future Work -- Misc Key Binding Ideas::  
+
+Future Work -- Byte Code Snippets
+
+* Future Work -- Autodetection::  
+* Future Work -- Conversion Error Detection::  
+* Future Work -- BIDI Support::  
+* Future Work -- Localized Text/Messages::  
+
+Future Work -- Lisp Engine Replacement
+
+* Future Work -- Lisp Engine Discussion::  
+* Future Work -- Lisp Engine Replacement -- Implementation::  
+
+Future Work Discussion
+
+* Discussion -- garbage collection::  
+* Discussion -- glyphs::        
+
+Old Future Work
+
+* Future Work -- A Portable Unexec Replacement::  
+* Future Work -- Indirect Buffers::  
+* Future Work -- Improvements in support for non-ASCII (European) keysyms under X::  
+* Future Work -- xemacs.org Mailing Address Changes::  
+* Future Work -- Lisp callbacks from critical areas of the C code::  
+
 @end detailmenu
 @end menu
 
-@node A History of Emacs, XEmacs From the Outside, Top, Top
+@node Introduction, Authorship of XEmacs, Top, Top
+@chapter Introduction
+@cindex introduction
+@cindex authorship, manual
+
+This manual documents the internals of XEmacs.  It presumes knowledge of
+how to use XEmacs (@pxref{Top,,, xemacs, XEmacs User's Manual}), and
+especially, knowledge of XEmacs Lisp (@pxref{Top,,, lispref, XEmacs Lisp
+Reference Manual}).  Information in either of these manuals will not be
+repeated here, and some information in the Lisp Reference Manual in
+particular is more relevant to a person working on the internals than
+the average XEmacs Lisp programmer. (In such cases, a cross-reference is
+usually made to the Lisp Reference Manual.)
+
+Ideally, this manual would be complete and up-to-date.  Unfortunately,
+in reality it is neither, due to the limited resources of the
+maintainers of XEmacs. (That said, it is much better than the internal
+documentation of most programs.) Also, much information about the
+internals is documented only in the code itself, in the form of
+comments.  Furthermore, since the maintainers are more likely to be
+working on the code than on this manual, information contained in
+comments may be more up-to-date than information in this manual.  Do not
+assume that all information in this manual is necessarily accurate as of
+the snapshot of the code you are looking at, and in the case of
+contradictions between the code comments and the manual, @strong{always}
+assume that the code comments are correct. (Because of the proximity of
+the comments to the code, comments will rarely be out-of-date.)
+
+This manual was primarily written by Ben Wing.  Certain sections were
+written by others, including those mentioned on the title page as well
+as other coders.  Some sections were lifted directly from comments in
+the code, and in those cases we may not completely be aware of the
+authorship.  In addition, due to the collaborative nature of XEmacs,
+many people have made small changes and emendations as they have
+discovered problems.
+
+The following is a (necessarily incomplete) list of the work that was
+@emph{not} done by Ben Wing (for more complete information, take a look
+at the ChangeLog for the @file{man} directory and the CVS records of
+actual changes):
+
+@table @asis
+@item Stephen Turnbull
+Various cleanup work, mostly post-2000.  Object-Oriented Techniques in
+XEmacs.  A Reader's Guide to XEmacs Coding Conventions.  Searching and
+Matching.  Regression Testing XEmacs.  Modules for Regression Testing.
+Lucid Widget Library.
+@item Martin Buchholz
+Various cleanup work, mostly pre-2001.  Docs on inline functions.  Docs
+on dfc conversion functions (Conversion to and from External Data).
+Improvements in support for non-ASCII (European) keysyms under X.
+@item Hrvoje Niksic
+Coding for Mule.
+@item Matthias Neubauer
+Garbage Collection - Step by Step.
+@item Olivier Galibert
+Portable dumper documentation.
+@item Andy Piper
+Redisplay Piece by Piece.  Glyphs.
+@item Chuck Thompson
+Line Start Cache.
+@item Kenichi Handa
+CCL.
+@end table
+
+@node Authorship of XEmacs, A History of Emacs, Introduction, Top
+@chapter Authorship of XEmacs
+@cindex authorship, XEmacs
+
+General authorship in chronological order:
+
+@table @asis
+
+@item Jamie Zawinski, Eric Benson, Matthieu Devin, Harlan Sexton
+These were the early creators of Lucid Emacs, the predecessor of XEmacs.
+Jamie Zawinski was the primary maintainer and coder for Lucid Emacs,
+active between early 1991 and June 1994.  He presided over versions 19.0
+through 19.10, and then abruptly left for Netscape.  He wrote the
+advanced stream code, the Xt interface code, the byte compiler, the
+original version of the X selection code, the first, second and third
+versions of the face code (which appeared in 19.0, 19.6 and 19.9
+respectively), and part of the keymap code; he also separated the Lisp
+directories into many subdirectories and made many smaller changes.
+Matthieu Devin wrote the original version of the extents code.  Someone
+else at Lucid wrote the Lucid widget library (lwlib), with the exception
+of the scrollbar code, which was added later.
+
+@item Richard Mlynarik
+Active 1991 to 1993, author of much of the current Lisp object scheme,
+including lrecords and lcrecords (he added this support in 1993 to allow
+for 28-bit pointers, which had previously been restricted to 26 bits).
+He moved the minibuffer and abbrev code into Lisp, worked on the keymap
+code, and did the initial synching between XEmacs and the first released
+version of GNU Emacs 19 in mid-1993.
+
+@item Martin Buchholz
+Active 1995 to 2001, maintainer of XEmacs late 1999 to ?, author of the
+current configure support, many optimizations to the byte interpreter,
+many improvements to the case-changing code, and many bug fixes to the
+process and system-specific code; also general spell-checking and
+code-cleanliness guru.
+
+@item Steve Baur
+Maintainer of XEmacs 1996 to 1999, responsible for many improvements to
+the XEmacs development process, for example the creation of the review
+board and arranging for XEmacs to be placed under CVS.  Author of the
+package code.
+
+@item Chuck Thompson
+Active January 1993 to June of 1996, author of the current and previous
+versions of the redisplay code and maintainer of XEmacs from mid-1994
+to mid-1996.  Creator of @code{xemacs.org}.  Also wrote the scrollbar
+code, the original configure support, and prototype versions of the
+toolbar and device code.
+
+@item Ben Wing
+Active April 1993 to April 1996 and February 2000 to present.  Chief
+coder for XEmacs between 1994 and 1996.  Ben Wing was never the
+maintainer of XEmacs, and as a result is the author of more of the
+XEmacs-specific code in XEmacs than anyone else.  Author of the Mule
+support, the extents code, the glyphs and specifiers code, most of the
+toolbar and device-abstraction code, the error-checking code, the
+Lstream code, the bit vector, char-table, and range-table code, much of
+the current Xt code, much, much of the events code (including most of
+the TTY event code), some of the face code, and numerous other aspects
+of the code.  Also author of most of the XEmacs documentation, including
+the Internals Manual and the XEmacs additions to the Lisp Reference
+Manual, and responsible for much of the synching between XEmacs and GNU
+Emacs.
+
+@item Kyle Jones
+Author of the minimal tagbits support for Lisp objects, which allows
+for 32-bit pointers and 31-bit integers.
+
+@item Olivier Galibert
+Author of the portable dumping mechanism.
+
+@item Andy Piper
+Author of the widget support, the gutter support and much of the
+Microsoft Windows support.
+
+@item Kirill Katsnelson
+Author of many improvements to Microsoft Windows support, the current
+sub-process code, and revamping of the display size change mechanism.
+
+@item Jonathan Harris
+Author of much of the Microsoft Windows support.
+@end table
+
+Authorship of some of the modules:
+
+@table @file
+@item alloc.c
+Inherited in 1991 from a prototype of GNU Emacs 19.  Around mid-1993
+Richard Mlynarik redid much of the code, creating the existing system of
+object abstractions (where each object can define its own marking
+method, printing method, and so on) and the existing scheme of lrecords
+and lcrecords.  This was done both to increase the number of bits that a
+pointer can occupy from 26 to 28 and to provide a general framework for
+creating new object types easily.  The garbage collection and frob-block
+allocation code is left over from the original version, but was cleaned
+up somewhat by Mlynarik.  Later in 1993, Jamie Zawinski improved the
+code that kept track of pure space usage so it would report exactly
+where you exceeded the pure space and how much pure space you were going
+to have to add to get everything to fit.  He also added code to issue
+nice pure space and garbage collection statistics at the end of dumping.
+Early in 1995, Ben Wing cleaned up the frob-block code to be as compact
+as possible and added the various bits of error checking, which are
+controlled using the @code{ERROR_CHECK_*} macros.  He also added the
+ability of strings to be resized, which is necessary under MULE, because
+you can replace one character in a string with another character of a
+different size, and as a result the string needs to be resized.  Ben
+Wing also added bit vectors for 19.13 around September 1995, and
+lcrecord lists for 19.14 around December 1995.  Steve Baur did some work
+on the purification and dump-time code, and added Doug Lea malloc
+support from Emacs 20.2 circa 1998.  Kyle Jones continued the work done
+by Mlynarik, reducing the number of primitive Lisp types so that there
+are only three: integer, character, and a pointer type, which
+encompasses all other types.  This allows for 31-bit integers and 32-bit
+pointers, although there is a potential slowdown from some extra
+indirections when determining the type of an object, and some memory
+increase for the objects that previously were considered to be the most
+primitive types.  Martin Buchholz has recently (February 2000) done some
+work to eliminate most of the slowdown.
+
+Olivier Galibert, mid-1999 to 2000, implemented the portable dumper.
+This writes out the state of the Lisp object heap to a disk file in a
+relocatable fashion so that it can later be read in at any memory
+location.  This work entailed a number of changes in @file{alloc.c}.
+For example, pure space was removed, and structures were created to
+define the types of all the elements contained in the various Lisp
+object structures and associated structures.
+
+@item alloca.c
+Inherited a long time ago from a prerelease version of GNU Emacs 19,
+kept in sync with more recent versions; very few changes for XEmacs.
+Most changes consist of converting the code to ANSI C and fixing up the
+includes at the top of the file to follow XEmacs conventions.
+
+@item alloca.s
+Inherited almost unchanged from the FSF; kept in sync up through 19.30;
+basically no changes for XEmacs.
+@end table
+
+@node A History of Emacs, XEmacs From the Outside, Authorship of XEmacs, Top
 @chapter A History of Emacs
 @cindex history of Emacs, a
 @cindex Emacs, a history of
@@ -360,20 +813,44 @@
 * XEmacs::                      The continuation of Lucid Emacs.
 @end menu
 
-@node Through Version 18
+@node Through Version 18, Lucid Emacs, A History of Emacs, A History of Emacs
 @section Through Version 18
 @cindex version 18, through
 @cindex Gosling, James
 @cindex Great Usenet Renaming
 
-  Although the history of the early versions of GNU Emacs is unclear,
-the history is well-known from the middle of 1985.  A time line is:
+As described above, Emacs began life in the mid-1970's as a series of
+editor macros for TECO, an early editor on the PDP-10.  In the early
+1980's it was rewritten in C as a collaboration between Richard
+M. Stallman (RMS) and James Gosling (the creator of Java); its extension
+language was known as @dfn{Mocklisp}.  This version of Emacs-in-C formed
+the basis for the early versions of GNU Emacs and also for Gosling's
+Unipress Emacs, a commercial product.  Because of bad blood between the
+two over the issue of commercialism, RMS pretty much disowned this
+collaboration, referring to it as ``Gosling Emacs''.
+
+At this point we pick up with a timeline of events. (A broader timeline
+is available at @uref{http://www.jwz.org/doc/emacs-timeline.html,
+``Emacs Timeline''}.)
 
 @itemize @bullet
 @item
-GNU Emacs version 15 (15.34) was released sometime in 1984 or 1985 and
-shared some code with a version of Emacs written by James Gosling (the
-same James Gosling who later created the Java language).
+Unipress Emacs, a $395 commercial product, was released on May 6, 1983.
+This was an outgrowth of the Emacs-in-C collaboration written by Gosling
+and RMS.
+
+@item
+GNU Emacs version 13.0? was released on March 20, 1985.  This may have
+been the initial public release.  It was also based on the same
+Emacs-in-C collaboration.
+
+@item
+GNU Emacs version 15.10 was released on April 11, 1985.
+
+@item
+GNU Emacs version 15.34 was released on May 7, 1985.  This appears
+to be the last release of version 15.
+
 @item
 GNU Emacs version 16 (first released version was 16.56) was released on
 July 15, 1985.  All Gosling code was removed due to potential copyright
@@ -474,7 +951,7 @@
 version 18.59 released October 31, 1992.
 @end itemize
 
-@node Lucid Emacs
+@node Lucid Emacs, GNU Emacs 19, Through Version 18, A History of Emacs
 @section Lucid Emacs
 @cindex Lucid Emacs
 @cindex Lucid Inc.
@@ -540,7 +1017,7 @@
 version 19.10 released May 27, 1994.
 @end itemize
 
-@node GNU Emacs 19
+@node GNU Emacs 19, GNU Emacs 20, Lucid Emacs, A History of Emacs
 @section GNU Emacs 19
 @cindex GNU Emacs 19
 @cindex Emacs 19, GNU
@@ -619,7 +1096,7 @@
 working on and using GNU Emacs for a long time (back as far as version
 16 or 17).
 
-@node GNU Emacs 20
+@node GNU Emacs 20, XEmacs, GNU Emacs 19, A History of Emacs
 @section GNU Emacs 20
 @cindex GNU Emacs 20
 @cindex Emacs 20, GNU
@@ -640,7 +1117,7 @@
 version 20.3 released August 19, 1998.
 @end itemize
 
-@node XEmacs
+@node XEmacs,  , GNU Emacs 20, A History of Emacs
 @section XEmacs
 @cindex XEmacs
 
@@ -1289,7 +1766,7 @@
 @end enumerate
 
 Put these together and you'll see it's perfectly acceptable to build
-auto-autoloads *after* dumping if no @file{.elc} files are out-of-date.
+auto-autoloads @strong{after} dumping if no @file{.elc} files are out-of-date.
 @end quotation
 
 These Lisp driver programs typically run from temacs, not a dumped
@@ -1950,11 +2427,11 @@
 type renaming".
 
 @menu
-* Great Integral Type Renaming::
-* Text/Char Type Renaming::
+* Great Integral Type Renaming::  
+* Text/Char Type Renaming::     
 @end menu
 
-@node Great Integral Type Renaming
+@node Great Integral Type Renaming, Text/Char Type Renaming, Major Textual Changes, Major Textual Changes
 @section Great Integral Type Renaming
 @cindex Great Integral Type Renaming
 @cindex integral type renaming, great
@@ -1988,7 +2465,7 @@
 @item
 All such quantity types just mentioned boil down to EMACS_INT, which is
 32 bits on 32-bit machines and 64 bits on 64-bit machines.  This is
-guaranteed to be the same size as Lisp objects of type `int', and (as
+guaranteed to be the same size as Lisp objects of type @code{int}, and (as
 far as I can tell) of size_t (unsigned!) and ssize_t.  The only type
 below that is not an EMACS_INT is Hashcode, which is an unsigned value
 of the same size as EMACS_INT.
@@ -2070,7 +2547,7 @@
 
 @enumerate
 @item
-in lisp.h, removed duplicate declarations of Bytecount.  The changed
+in @file{lisp.h}, removed duplicate declarations of Bytecount.  The changed
 code should now look like this: (In each code snippet below, the first
 and last lines are the same as the original, as are all lines outside of
 those lines.  That allows you to locate the section to be replaced, and
@@ -2094,7 +2571,7 @@
 @end example
 
 @item 
-in lstream.h, removed duplicate declaration of Bytecount.  Rewrote the
+in @file{lstream.h}, removed duplicate declaration of Bytecount.  Rewrote the
 comment about this type.  The changed code should now look like this:
 
 @example
@@ -2103,7 +2580,7 @@
 
 /* The have been some arguments over the what the type should be that
    specifies a count of bytes in a data block to be written out or read in,
-   using Lstream_read(), Lstream_write(), and related functions.
+   using @code{Lstream_read()}, @code{Lstream_write()}, and related functions.
    Originally it was long, which worked fine; Martin "corrected" these to
    size_t and ssize_t on the grounds that this is theoretically cleaner and
    is in keeping with the C standards.  Unfortunately, this practice is
@@ -2121,7 +2598,7 @@
    bytes actually read to or written from in an operation, and these
    functions can return -1 to signal error.
 
-   Note that the standard Unix read() and write() functions define the
+   Note that the standard Unix @code{read()} and @code{write()} functions define the
    count going in as a size_t, which is UNSIGNED, and the count going
    out as an ssize_t, which is SIGNED.  This is a horrible design
    flaw.  Not only is it highly likely to lead to logic errors when a
@@ -2140,13 +2617,13 @@
 @end example
 
 @item
-in dumper.c, there are four places, all inside of switch() statements,
+in @file{dumper.c}, there are four places, all inside of @code{switch()} statements,
 where XD_BYTECOUNT appears twice as a case tag.  In each case, the two
 case blocks contain identical code, and you should *REMOVE THE SECOND*
 and leave the first.
 @end enumerate
 
-@node Text/Char Type Renaming
+@node Text/Char Type Renaming,  , Great Integral Type Renaming, Major Textual Changes
 @section Text/Char Type Renaming
 @cindex Text/Char Type Renaming
 @cindex type renaming, text/char
@@ -2211,7 +2688,7 @@
 just merge all the textual changes directly.  Use something like this:
 
 (WARNING: I'm not a CVS guru; before trying this, or any large operation
-that might potentially mess things up, *DEFINITELY* make a backup of
+that might potentially mess things up, @strong{DEFINITELY} make a backup of
 your existing workspace.)
 
 @example
@@ -2237,7 +2714,7 @@
 
 # Evidently Perl considers _ to be a word char ala \b, even though XEmacs
 # doesn't.  We need to be careful here with ibyte/ichar because of words
-# like Richard, eicharlen(), multibyte, HIBYTE, etc.
+# like Richard, @code{eicharlen()}, multibyte, HIBYTE, etc.
 
 gr Ibyte Intbyte $files
 gr '\bIBYTE' INTBYTE $files
@@ -2277,18 +2754,20 @@
 situations, often in code far away from where the actual breakage is.
 
 @menu
-* A Reader's Guide to XEmacs Coding Conventions::
-* General Coding Rules::
-* Object-Oriented Techniques for C::
-* Writing Lisp Primitives::
-* Writing Good Comments::
-* Adding Global Lisp Variables::
-* Proper Use of Unsigned Types::
-* Coding for Mule::
-* Techniques for XEmacs Developers::
+* A Reader's Guide to XEmacs Coding Conventions::  
+* General Coding Rules::        
+* Object-Oriented Techniques for C::  
+* Writing Lisp Primitives::     
+* Writing Good Comments::       
+* Adding Global Lisp Variables::  
+* Writing Macros::              
+* Proper Use of Unsigned Types::  
+* Techniques for XEmacs Developers::  
 @end menu
 
-@node A Reader's Guide to XEmacs Coding Conventions
+See also @ref{Coding for Mule}.
+
+@node A Reader's Guide to XEmacs Coding Conventions, General Coding Rules, Rules When Writing New C Code, Rules When Writing New C Code
 @section A Reader's Guide to XEmacs Coding Conventions
 @cindex coding conventions
 @cindex reader's guide
@@ -2381,7 +2860,7 @@
 @code{DEFUN} macro.)
 
 
-@node General Coding Rules
+@node General Coding Rules, Object-Oriented Techniques for C, A Reader's Guide to XEmacs Coding Conventions, Rules When Writing New C Code
 @section General Coding Rules
 @cindex coding rules, general
 
@@ -2391,7 +2870,11 @@
 C++ compilers are more nit-picking, and a number of coding errors have
 been found by compiling with C++.  The ability to use both C and C++
 tools means that a greater variety of development tools are available to
-the developer.
+the developer.  In addition, the ability to overload operators in C++
+means it is possible, for error-checking purposes, to redefine certain
+simple types (normally defined as aliases for simple built-in types such
+as @code{unsigned char} or @code{long}) as classes, strictly limiting the permissible
+operations and catching illegal implicit casts and such.
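+
+As a minimal sketch of how this can work (the class shown here is
+hypothetical, not the actual XEmacs definition of any type, and the
+preprocessor flag name is only illustrative), an error-checking C++
+build might replace a plain integral typedef with a small class:
+
+@example
+#ifdef MY_ERROR_CHECK_TYPES     /* illustrative flag name */
+class Bytecount_t
+@{
+public:
+  explicit Bytecount_t (long v) : val (v) @{@}
+  long value () const @{ return val; @}
+  Bytecount_t operator+ (Bytecount_t other) const
+    @{ return Bytecount_t (val + other.val); @}
+  bool operator< (Bytecount_t other) const
+    @{ return val < other.val; @}
+private:
+  long val;                     /* the underlying representation */
+  Bytecount_t (void *);         /* never defined: rejects pointers */
+@};
+#else
+typedef long Bytecount_t;       /* normal build: just an integer */
+#endif
+@end example
+
+With definitions like this for each such count type, implicitly mixing
+one kind of count with another, or with a raw pointer, becomes a
+compile-time error instead of a silent bug.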
 
 Every module includes @file{<config.h>} (angle brackets so that
 @samp{--srcdir} works correctly; @file{config.h} may or may not be in
@@ -2500,7 +2983,7 @@
 @code{LIST_LOOP_DELETE_IF} delete elements from a lisp list satisfying some
 predicate.
 
-@node Object-Oriented Techniques for C
+@node Object-Oriented Techniques for C, Writing Lisp Primitives, General Coding Rules, Rules When Writing New C Code
 @section Object-Oriented Techniques for C
 @cindex coding rules, object-oriented
 @cindex object-oriented techniques
@@ -2600,7 +3083,7 @@
 may be a rather large number of them.
 
 
-@node Writing Lisp Primitives
+@node Writing Lisp Primitives, Writing Good Comments, Object-Oriented Techniques for C, Rules When Writing New C Code
 @section Writing Lisp Primitives
 @cindex writing Lisp primitives
 @cindex Lisp primitives, writing
@@ -2738,7 +3221,7 @@
 concerns described above for @code{F...} names (in particular,
 underscores in the C arguments become dashes in the Lisp arguments).
 
-There is one additional kludge: A trailing `_' on the C argument is
+There is one additional kludge: A trailing @samp{_} on the C argument is
 discarded when forming the Lisp argument.  This allows C language
 reserved words (like @code{default}) or global symbols (like
 @code{dirname}) to be used as argument names without compiler warnings
@@ -2847,7 +3330,7 @@
 @file{lisp.h} contains the definitions for important macros and
 functions.
 
-@node Writing Good Comments
+@node Writing Good Comments, Adding Global Lisp Variables, Writing Lisp Primitives, Rules When Writing New C Code
 @section Writing Good Comments
 @cindex writing good comments
 @cindex comments, writing good
@@ -2910,7 +3393,7 @@
 To indicate a "todo" or other problem, use four pound signs --
 i.e. @samp{####}.
 
-@node Adding Global Lisp Variables
+@node Adding Global Lisp Variables, Writing Macros, Writing Good Comments, Rules When Writing New C Code
 @section Adding Global Lisp Variables
 @cindex global Lisp variables, adding
 @cindex variables, adding global Lisp
@@ -2979,7 +3462,64 @@
 Lisp object, and you will be the one who's unhappy when you can't figure
 out how your variable got overwritten.
 
-@node Proper Use of Unsigned Types
+@node Writing Macros, Proper Use of Unsigned Types, Adding Global Lisp Variables, Rules When Writing New C Code
+@section Writing Macros
+@cindex writing macros
+@cindex macros, writing
+
+The three golden rules of macros:
+
+@enumerate
+@item
+Anything that's an lvalue can be evaluated more than once.
+@item
+Macros where anything else can be evaluated more than once should
+have the word "unsafe" in their name (exceptions may be made for
+large sets of macros that evaluate arguments of certain types more
+than once, e.g. struct buffer * arguments, when clearly indicated in
+the macro documentation).  These macros are generally meant to be
+called only by other macros that have already stored the calling
+values in temporary variables.
+@item
+Nothing else can be evaluated more than once.  Use inline
+functions, if necessary, to prevent multiple evaluation.
+@end enumerate
+
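+Purely as an illustrative sketch of the rules above (the names are made
+up, not actual XEmacs macros), compare an unsafe macro with an
+inline-function replacement:
+
+@example
+/* Evaluates its arguments twice -- fine only if the arguments are
+   simple lvalues, and the name should say so: */
+#define UNSAFE_MAX_DEMO(a, b) ((a) > (b) ? (a) : (b))
+
+/* An inline function evaluates each argument exactly once, so
+   UNSAFE_MAX_DEMO (x++, y) style bugs cannot happen: */
+static inline int
+max_demo (int a, int b)
+@{
+  return a > b ? a : b;
+@}
+@end example
+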
+NOTE: The functions and macros below are given full prototypes in their
+docs, even when the implementation is a macro.  In such cases, passing
+an argument of a type other than expected will produce undefined
+results.  Also, given that macros can do things functions can't (in
+particular, directly modify arguments as if they were passed by
+reference), the declaration syntax has been extended to include the
+call-by-reference syntax from C++, where an @samp{&} after a type indicates
+that the argument is an lvalue and is passed by reference, i.e. the
+function can modify its value. (This is equivalent in C to passing a
+pointer to the argument, but without the need to explicitly worry about
+pointers.)
+
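+For example, a documentation prototype using this extended syntax might
+read (the names are hypothetical, for illustration only):
+
+@example
+void SET_FOO_END (Lisp_Object& obj, int new_end);
+@end example
+
+@noindent
+meaning that the macro may modify @var{obj} itself; a plain C function
+with the same effect would instead take a @code{Lisp_Object *}.
+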
+When to capitalize macros:
+
+@itemize @bullet
+@item
+Capitalize macros doing stuff obviously impossible with (C)
+functions, e.g. directly modifying arguments as if they were passed by
+reference.
+@item
+Capitalize macros that evaluate @strong{any} argument more than once regardless
+of whether that's "allowed" (e.g. buffer arguments).
+@item
+Capitalize macros that directly access a field in a @code{Lisp_Object} or
+its equivalent underlying structure.  In such cases, the macro that
+accesses through the @code{Lisp_Object} has a name beginning with
+@samp{X}, and the one that accesses through the underlying structure
+doesn't.
+@item
+Capitalize certain other basic macros relating to @code{Lisp_Object}s,
+e.g. @code{FRAMEP}, @code{CHECK_FRAME}, etc.
+@item
+Try to avoid capitalizing any other macros.
+@end itemize
+
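+As a hypothetical illustration of the third rule (the @code{foo} object
+type here is made up), the field accessors for an object would follow
+this pattern:
+
+@example
+/* Access through the Lisp_Object (note the X prefix): */
+#define XFOO_NAME(obj) (XFOO (obj)->name)
+/* Access through a pointer to the underlying structure: */
+#define FOO_NAME(f) ((f)->name)
+@end example
+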
+@node Proper Use of Unsigned Types, Techniques for XEmacs Developers, Writing Macros, Rules When Writing New C Code
 @section Proper Use of Unsigned Types
 @cindex unsigned types, proper use of
 @cindex types, proper use of unsigned
@@ -3010,612 +3550,7 @@
 Other reasonable uses of @code{unsigned int} and @code{unsigned long}
 are representing non-quantities -- e.g. bit-oriented flags and such.
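+
+A minimal illustration (the flag names are hypothetical):
+
+@example
+/* Bit-oriented flags are a legitimate use of unsigned types: */
+#define DEMO_FLAG_READ   (1U << 0)
+#define DEMO_FLAG_WRITE  (1U << 1)
+
+unsigned int flags = DEMO_FLAG_READ | DEMO_FLAG_WRITE;
+@end example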
 
-@node Coding for Mule
-@section Coding for Mule
-@cindex coding for Mule
-@cindex Mule, coding for
-
-Although Mule support is not compiled by default in XEmacs, many people
-are using it, and we consider it crucial that new code works correctly
-with multibyte characters.  This is not hard; it is only a matter of
-following several simple user-interface guidelines.  Even if you never
-compile with Mule, with a little practice you will find it quite easy
-to code Mule-correctly.
-
-Note that these guidelines are not necessarily tied to the current Mule
-implementation; they are also a good idea to follow on the grounds of
-code generalization for future I18N work.
-
-@menu
-* Character-Related Data Types::
-* Working With Character and Byte Positions::
-* Conversion to and from External Data::
-* General Guidelines for Writing Mule-Aware Code::
-* An Example of Mule-Aware Code::
-* Mule-izing Code::
-@end menu
-
-@node Character-Related Data Types
-@subsection Character-Related Data Types
-@cindex character-related data types
-@cindex data types, character-related
-
-First, let's review the basic character-related datatypes used by
-XEmacs.  Note that some of the separate @code{typedef}s are not
-mandatory, but they improve clarity of code a great deal, because one
-glance at the declaration can tell the intended use of the variable.
-
-@table @code
-@item Ichar
-@cindex Ichar
-An @code{Ichar} holds a single Emacs character.
-
-Obviously, the equality between characters and bytes is lost in the Mule
-world.  Characters can be represented by one or more bytes in the
-buffer, and @code{Ichar} is a C type large enough to hold any
-character.  (This currently isn't quite true for ISO 10646, which
-defines a character as a 31-bit non-negative quantity, while XEmacs
-characters are only 30-bits.  This is irrelevant, unless you are
-considering using the ISO 10646 private groups to support really large
-private character sets---in particular, the Mule character set!---in
-a version of XEmacs using Unicode internally.)
-
-Without Mule support, an @code{Ichar} is equivalent to an
-@code{unsigned char}.  [[This doesn't seem to be true; @file{lisp.h}
-unconditionally @samp{typedef}s @code{Ichar} to @code{int}.]]
-
-@item Ibyte
-@cindex Ibyte
-The data representing the text in a buffer or string is logically a set
-of @code{Ibyte}s.
-
-XEmacs does not work with the same character formats all the time; when
-reading characters from the outside, it decodes them to an internal
-format, and likewise encodes them when writing.  @code{Ibyte} (in fact
-@code{unsigned char}) is the basic unit of XEmacs internal buffers and
-strings format.  An @code{Ibyte *} is the type that points at text
-encoded in the variable-width internal encoding.
-
-One character can correspond to one or more @code{Ibyte}s.  In the
-current Mule implementation, an ASCII character is represented by the
-same @code{Ibyte}, and other characters are represented by a sequence
-of two or more @code{Ibyte}s.  (This will also be true of an
-implementation using UTF-8 as the internal encoding.  In fact, only code
-that implements character code conversions and a very few macros used to
-implement motion by whole characters will notice the difference between
-UTF-8 and the Mule encoding.)
-
-Without Mule support, there are exactly 256 characters, implicitly
-Latin-1, and each character is represented using one @code{Ibyte}, and
-there is a one-to-one correspondence between @code{Ibyte}s and
-@code{Ichar}s.
-
-@item Charxpos
-@itemx Charbpos
-@itemx Charcount
-@cindex Charxpos
-@cindex Charbpos
-@cindex Charcount
-A @code{Charbpos} represents a character position in a buffer.  A
-@code{Charcount} represents a number (count) of characters.  Logically,
-subtracting two @code{Charbpos} values yields a @code{Charcount} value.
-When representing a character position in a string, we just use
-@code{Charcount} directly.  The reason for having a separate typedef for
-buffer positions is that they are 1-based, whereas string positions are
-0-based and hence string counts and positions can be freely intermixed (a
-string position is equivalent to the count of characters from the
-beginning).  When representing a character position that could be either
-in a buffer or string (for example, in the extent code), @code{Charxpos}
-is used.  Although all of these are @code{typedef}ed to
-@code{EMACS_INT}, we use them in preference to @code{EMACS_INT} to make
-it clear what sort of position is being used.
-
-@code{Charxpos}, @code{Charbpos} and @code{Charcount} values are the
-only ones that are ever visible to Lisp.
-
-@item Bytexpos
-@itemx Bytebpos
-@itemx Bytecount
-@cindex Bytexpos
-@cindex Bytebpos
-@cindex Bytecount
-A @code{Bytebpos} represents a byte position in a buffer.  A
-@code{Bytecount} represents the distance between two positions, in
-bytes.  Byte positions in strings use @code{Bytecount}, and for byte
-positions that can be either in a buffer or string, @code{Bytexpos} is
-used.  The relationship between @code{Bytexpos}, @code{Bytebpos} and
-@code{Bytecount} is the same as the relationship between
-@code{Charxpos}, @code{Charbpos} and @code{Charcount}.
-
-@item Extbyte
-@cindex Extbyte
-When dealing with the outside world, XEmacs works with @code{Extbyte}s,
-which are equivalent to @code{char}.  The distance between two
-@code{Extbyte}s is a @code{Bytecount}, since external text is a
-byte-by-byte encoding.  Extbytes occur mainly at the transition point
-between internal text and external functions.  XEmacs code should not,
-if it can possibly avoid it, do any actual manipulation using external
-text, since its format is completely unpredictable (it might not even be
-ASCII-compatible).
-@end table
-
-@node Working With Character and Byte Positions
-@subsection Working With Character and Byte Positions
-@cindex character and byte positions, working with
-@cindex byte positions, working with character and
-@cindex positions, working with character and byte
-
-Now that we have defined the basic character-related types, we can look
-at the macros and functions designed for work with them and for
-conversion between them.  Most of these macros are defined in
-@file{buffer.h}, and we don't discuss all of them here, but only the
-most important ones.  Examining the existing code is the best way to
-learn about them.
-
-@table @code
-@item MAX_ICHAR_LEN
-@cindex MAX_ICHAR_LEN
-This preprocessor constant is the maximum number of buffer bytes to
-represent an Emacs character in the variable width internal encoding.
-It is useful when allocating temporary strings to keep a known number of
-characters.  For instance:
-
-@example
-@group
-@{
-  Charcount cclen;
-  ...
-  @{
-    /* Allocate place for @var{cclen} characters. */
-    Ibyte *buf = (Ibyte *) alloca (cclen * MAX_ICHAR_LEN);
-...
-@end group
-@end example
-
-If you followed the previous section, you can guess that, logically,
-multiplying a @code{Charcount} value with @code{MAX_ICHAR_LEN} produces
-a @code{Bytecount} value.
-
-In the current Mule implementation, @code{MAX_ICHAR_LEN} equals 4.
-Without Mule, it is 1.  In a mature Unicode-based XEmacs, it will also
-be 4 (since all Unicode characters can be encoded in UTF-8 in 4 bytes or
-less), but some versions may use up to 6, in order to use the large
-private space provided by ISO 10646 to ``mirror'' the Mule code space.
-
-@item itext_ichar
-@itemx set_itext_ichar
-@cindex itext_ichar
-@cindex set_itext_ichar
-The @code{itext_ichar} macro takes a @code{Ibyte} pointer and
-returns the @code{Ichar} stored at that position.  If it were a
-function, its prototype would be:
-
-@example
-Ichar itext_ichar (Ibyte *p);
-@end example
-
-@code{set_itext_ichar} stores an @code{Ichar} to the specified byte
-position.  It returns the number of bytes stored:
-
-@example
-Bytecount set_itext_ichar (Ibyte *p, Ichar c);
-@end example
-
-It is important to note that @code{set_itext_ichar} is safe only for
-appending a character at the end of a buffer, not for overwriting a
-character in the middle.  This is because the width of characters
-varies, and @code{set_itext_ichar} cannot resize the string if it
-writes, say, a two-byte character where a single-byte character used to
-reside.
-
-A typical use of @code{set_itext_ichar} can be demonstrated by this
-example, which copies characters from buffer @var{buf} to a temporary
-string of Ibytes.
-
-@example
-@group
-@{
-  Charbpos pos;
-  for (pos = beg; pos < end; pos++)
-    @{
-      Ichar c = BUF_FETCH_CHAR (buf, pos);
-      p += set_itext_ichar (p, c);
-    @}
-@}
-@end group
-@end example
-
-Note how @code{set_itext_ichar} is used to store the @code{Ichar}
-and advance the pointer @code{p} at the same time.
-
-@item INC_IBYTEPTR
-@itemx DEC_IBYTEPTR
-@cindex INC_IBYTEPTR
-@cindex DEC_IBYTEPTR
-These two macros increment and decrement an @code{Ibyte} pointer,
-respectively.  They will adjust the pointer by the appropriate number of
-bytes according to the byte length of the character stored there.  Both
-macros assume that the memory address is located at the beginning of a
-valid character.
-
-Without Mule support, @code{INC_IBYTEPTR (p)} and @code{DEC_IBYTEPTR (p)}
-simply expand to @code{p++} and @code{p--}, respectively.
-
-@item bytecount_to_charcount
-@cindex bytecount_to_charcount
-Given a pointer to a text string and a length in bytes, return the
-equivalent length in characters.
-
-@example
-Charcount bytecount_to_charcount (Ibyte *p, Bytecount bc);
-@end example
-
-@item charcount_to_bytecount
-@cindex charcount_to_bytecount
-Given a pointer to a text string and a length in characters, return the
-equivalent length in bytes.
-
-@example
-Bytecount charcount_to_bytecount (Ibyte *p, Charcount cc);
-@end example
-
-@item itext_n_addr
-@cindex itext_n_addr
-Return a pointer to the beginning of the character offset @var{cc} (in
-characters) from @var{p}.
-
-@example
-Ibyte *itext_n_addr (Ibyte *p, Charcount cc);
-@end example
-@end table
-
-@node Conversion to and from External Data
-@subsection Conversion to and from External Data
-@cindex conversion to and from external data
-@cindex external data, conversion to and from
-
-When an external function, such as a C library function, returns a
-@code{char} pointer, you should almost never treat it as @code{Ibyte}.
-This is because these returned strings may contain 8-bit characters which
-can be misinterpreted by XEmacs, and cause a crash.  Likewise, when
-exporting a piece of internal text to the outside world, you should
-always convert it to an appropriate external encoding, lest the internal
-stuff (such as the infamous \201 characters) leak out.
-
-The interface to conversion between the internal and external
-representations of text are the numerous conversion macros defined in
-@file{buffer.h}.  There used to be a fixed set of external formats
-supported by these macros, but now any coding system can be used with
-them.  The coding system alias mechanism is used to create the
-following logical coding systems, which replace the fixed external
-formats.  The @code{dontusethis-set-symbol-value-handler} mechanism was
-enhanced to make this possible (more work on that is needed).
-
-Often useful coding systems:
-
-@table @code
-@item Qbinary
-This is the simplest format and is what we use in the absence of a more
-appropriate format.  This converts according to the @code{binary} coding
-system:
-
-@enumerate a
-@item
-On input, bytes 0--255 are converted into (implicitly Latin-1)
-characters 0--255.  A non-Mule XEmacs doesn't really know about
-different character sets and the fonts to display them, so the bytes can
-be treated as text in different 1-byte encodings by simply setting the
-appropriate fonts.  So in a sense, a non-Mule XEmacs is a multilingual
-editor if, for example, different fonts are used to display text in
-different buffers, faces, or windows.  The specifier mechanism gives the
-user complete control over this kind of behavior.
-@item
-On output, characters 0--255 are converted into bytes 0--255 and other
-characters are converted into `~'.
-@end enumerate
-
-@item Qnative
-Format used for the external Unix environment---@code{argv[]}, stuff
-from @code{getenv()}, stuff from the @file{/etc/passwd} file, etc.
-This is encoded according to the encoding specified by the current locale.
-[[This is dangerous; current locale is user preference, and the system
-is probably going to be something else.  Is there anything we can do
-about it?]]
-
-@item Qfile_name
-Format used for filenames.  This is normally the same as @code{Qnative},
-but the two should be distinguished for clarity and possible future
-separation -- and also because @code{Qfile_name} can be changed using either
-the @code{file-name-coding-system} or @code{pathname-coding-system} (now
-obsolete) variables.
-
-@item Qctext
-Compound-text format.  This is the standard X11 format used for data
-stored in properties, selections, and the like.  This is an 8-bit
-no-lock-shift ISO2022 coding system.  This is a real coding system,
-unlike @code{Qfile_name}, which is user-definable.
-
-@item Qmswindows_tstr
-Used for external data in all MS Windows functions that are declared to
-accept data of type @code{LPTSTR} or @code{LPCSTR}.  This maps to either
-@code{Qmswindows_multibyte} (a locale-specific encoding, same as
-@code{Qnative}) or @code{Qmswindows_unicode}, depending on whether
-XEmacs is being run under Windows 9X or Windows NT/2000/XP.
-@end table
-
-Many other coding systems are provided by default.
-
-There are two fundamental macros to convert between external and
-internal format, as well as various convenience macros to simplify the
-most common operations.
-
-@code{TO_INTERNAL_FORMAT} converts external data to internal format, and
-@code{TO_EXTERNAL_FORMAT} converts the other way around.  The arguments
-each of these receives are a source type, a source, a sink type, a sink,
-and a coding system (or a symbol naming a coding system).
-
-A typical call looks like
-@example
-TO_EXTERNAL_FORMAT (LISP_STRING, str, C_STRING_MALLOC, ptr, Qfile_name);
-@end example
-
-which means that the contents of the lisp string @code{str} are written
-to a malloc'ed memory area which will be pointed to by @code{ptr}, after
-the function returns.  The conversion will be done using the
-@code{file-name} coding system, which will be controlled by the user
-indirectly by setting or binding the variable
-@code{file-name-coding-system}.
-
-Some sources and sinks require two C variables to specify.  We use some
-preprocessor magic to allow different source and sink types, and even
-different numbers of arguments to specify different types of sources and
-sinks.
-
-So we can have a call that looks like
-@example
-TO_INTERNAL_FORMAT (DATA, (ptr, len),
-                    MALLOC, (ptr, len),
-                    coding_system);
-@end example
-
-The parenthesized argument pairs are required to make the preprocessor
-magic work.
-
-Here are the different source and sink types:
-
-@table @code
-@item @code{DATA, (ptr, len),}
-input data is a fixed buffer of size @var{len} at address @var{ptr}
-@item @code{ALLOCA, (ptr, len),}
-output data is placed in an alloca()ed buffer of size @var{len} pointed to by @var{ptr}
-@item @code{MALLOC, (ptr, len),}
-output data is in a malloc()ed buffer of size @var{len} pointed to by @var{ptr}
-@item @code{C_STRING_ALLOCA, ptr,}
-equivalent to @code{ALLOCA (ptr, len_ignored)} on output.
-@item @code{C_STRING_MALLOC, ptr,}
-equivalent to @code{MALLOC (ptr, len_ignored)} on output
-@item @code{C_STRING, ptr,}
-equivalent to @code{DATA, (ptr, strlen/wcslen (ptr))} on input
-@item @code{LISP_STRING, string,}
-input or output is a Lisp_Object of type string
-@item @code{LISP_BUFFER, buffer,}
-output is written to @code{(point)} in lisp buffer @var{buffer}
-@item @code{LISP_LSTREAM, lstream,}
-input or output is a Lisp_Object of type lstream
-@item @code{LISP_OPAQUE, object,}
-input or output is a Lisp_Object of type opaque
-@end table
-
-A source type of @code{C_STRING} or a sink type of
-@code{C_STRING_ALLOCA} or @code{C_STRING_MALLOC} is appropriate where
-the external API is not '\0'-byte-clean -- i.e. it expects strings to be
-terminated with a null byte.  For external API's that are in fact
-'\0'-byte-clean, we should of course not use these.
-
-The sinks to be specified must be lvalues, unless they are the lisp
-object types @code{LISP_LSTREAM} or @code{LISP_BUFFER}.
-
-There is no problem using the same lvalue for source and sink.
-
-Garbage collection is inhibited during these conversion operations, so
-it is OK to pass in data from Lisp strings using @code{XSTRING_DATA}.
-
-For the sink types @code{ALLOCA} and @code{C_STRING_ALLOCA}, the
-resulting text is stored in a stack-allocated buffer, which is
-automatically freed on returning from the function.  However, the sink
-types @code{MALLOC} and @code{C_STRING_MALLOC} return @code{xmalloc()}ed
-memory.  The caller is responsible for freeing this memory using
-@code{xfree()}.
-
-Note that it doesn't make sense for @code{LISP_STRING} to be a source
-for @code{TO_INTERNAL_FORMAT} or a sink for @code{TO_EXTERNAL_FORMAT}.
-You'll get an assertion failure if you try.
-
-99% of conversions involve raw data or Lisp strings as both source and
-sink, and usually data is output as @code{alloca()}, or sometimes
-@code{xmalloc()}.  For this reason, convenience macros are defined for
-many types of conversions involving raw data and/or Lisp strings,
-especially when the output is an @code{alloca()}ed string. (When the
-destination is a Lisp string, there are other functions that should be
-used instead -- @code{build_ext_string()} and @code{make_ext_string()},
-for example.) The convenience macros are of two types -- the older kind
-that store the result into a specified variable, and the newer kind that
-return the result.  The newer kind of macros don't exist when the output
-is sized data, because that would have two return values.  NOTE: All
-convenience macros are ultimately defined in terms of
-@code{TO_EXTERNAL_FORMAT} and @code{TO_INTERNAL_FORMAT}.  Thus, any
-comments above about the workings of these macros also apply to all
-convenience macros.
-
-A typical old-style convenience macro is
-
-@example
-  C_STRING_TO_EXTERNAL (in, out, codesys);
-@end example
-
-This is equivalent to
-
-@example
-  TO_EXTERNAL_FORMAT (C_STRING, in, C_STRING_ALLOCA, out, codesys);
-@end example
-
-but is easier to write and somewhat clearer, since it clearly identifies
-the arguments without the clutter of having the preprocessor types mixed
-in.
-
-The new-style equivalent is @code{NEW_C_STRING_TO_EXTERNAL (src,
-codesys)}, which @emph{returns} the converted data (still in
-@code{alloca()} space).  This is far more convenient for most
-operations.
-
-@node General Guidelines for Writing Mule-Aware Code
-@subsection General Guidelines for Writing Mule-Aware Code
-@cindex writing Mule-aware code, general guidelines for
-@cindex Mule-aware code, general guidelines for writing
-@cindex code, general guidelines for writing Mule-aware
-
-This section contains some general guidance on how to write Mule-aware
-code, as well as some pitfalls you should avoid.
-
-@table @emph
-@item Never use @code{char} and @code{char *}.
-In XEmacs, the use of @code{char} and @code{char *} is almost always a
-mistake.  If you want to manipulate an Emacs character from ``C'', use
-@code{Ichar}.  If you want to examine a specific octet in the internal
-format, use @code{Ibyte}.  If you want a Lisp-visible character, use a
-@code{Lisp_Object} and @code{make_char}.  If you want a pointer to move
-through the internal text, use @code{Ibyte *}.  Also note that you
-almost certainly do not need @code{Ichar *}.  Other typedefs to clarify
-the use of @code{char} are @code{Char_ASCII}, @code{Char_Binary},
-@code{UChar_Binary}, and @code{CIbyte}.
-
-@item Be careful not to confuse @code{Charcount}, @code{Bytecount}, @code{Charbpos} and @code{Bytebpos}.
-The whole point of using different types is to avoid confusion about the
-use of certain variables.  Lest this effect be nullified, you need to be
-careful about using the right types.
-
-@item Always convert external data
-It is extremely important to always convert external data, because
-XEmacs can crash if unexpected 8-bit sequences are copied to its internal
-buffers literally.
-
-This means that when a system function, such as @code{readdir}, returns
-a string, you normally need to convert it using one of the conversion macros
-described in the previous chapter, before passing it further to Lisp.
-
-Actually, most of the basic system functions that accept '\0'-terminated
-string arguments, like @code{stat()} and @code{open()}, have
-@strong{encapsulated} equivalents that do the internal to external
-conversion themselves.  The encapsulated equivalents have a @code{qxe_}
-prefix and have string arguments of type @code{Ibyte *}, and you can
-pass internally encoded data to them, often from a Lisp string using
-@code{XSTRING_DATA}. (A better design might be to provide versions that
-accept Lisp strings directly.)  [[Really?  Then they'd either take
-@code{Lisp_Object}s and need to check type, or they'd take
-@code{Lisp_String}s, and violate the rules about passing any of the
-specific Lisp types.]]
-
-Also note that many internal functions, such as @code{make_string},
-accept Ibytes, which removes the need for them to convert the data they
-receive.  This increases efficiency because that way external data needs
-to be decoded only once, when it is read.  After that, it is passed
-around in internal format.
-
-@item Do all work in internal format
-External-formatted data is completely unpredictable in its format.  It
-may be fixed-width Unicode (not even ASCII compatible); it may be a
-modal encoding, in
-which case some occurrences of (e.g.) the slash character may be part of
-two-byte Asian-language characters, and a naive attempt to split apart a
-pathname by slashes will fail; etc.  Internal-format text should be
-converted to external format only at the point where an external API is
-actually called, and the first thing done after receiving
-external-format text from an external API should be to convert it to
-internal text.
-@end table
-
-@node An Example of Mule-Aware Code
-@subsection An Example of Mule-Aware Code
-@cindex code, an example of Mule-aware
-@cindex Mule-aware code, an example of
-
-As an example of Mule-aware code, we will analyze the @code{string}
-function, which conses up a Lisp string from the character arguments it
-receives.  Here is the definition, pasted from @file{alloc.c}:
-
-@example
-@group
-DEFUN ("string", Fstring, 0, MANY, 0, /*
-Concatenate all the argument characters and make the result a string.
-*/
-       (int nargs, Lisp_Object *args))
-@{
-  Ibyte *storage = alloca_array (Ibyte, nargs * MAX_ICHAR_LEN);
-  Ibyte *p = storage;
-
-  for (; nargs; nargs--, args++)
-    @{
-      Lisp_Object lisp_char = *args;
-      CHECK_CHAR_COERCE_INT (lisp_char);
-      p += set_itext_ichar (p, XCHAR (lisp_char));
-    @}
-  return make_string (storage, p - storage);
-@}
-@end group
-@end example
-
-Now we can analyze the source line by line.
-
-Obviously, the resulting string will contain as many characters as there
-are arguments to the function.  This is why we allocate
-@code{MAX_ICHAR_LEN} * @var{nargs}
-bytes on the stack, i.e. the worst-case number of bytes for @var{nargs}
-@code{Ichar}s to fit in the string.
-
-Then, the loop checks that each element is a character, converting
-integers in the process.  Like many other functions in XEmacs, this
-function silently accepts integers where characters are expected, for
-historical and compatibility reasons.  Unless you know what you are
-doing, @code{CHECK_CHAR} will also suffice.  @code{XCHAR (lisp_char)}
-extracts the @code{Ichar} from the @code{Lisp_Object}, and
-@code{set_itext_ichar} stores it into @code{storage}, advancing @code{p}
-past the stored character.
-
-Other instructive examples of correct coding under Mule can be found all
-over the XEmacs code.  For starters, I recommend
-@code{Fnormalize_menu_item_name} in @file{menubar.c}.  After you have
-understood this section of the manual and studied the examples, you can
-proceed writing new Mule-aware code.
-
-@node Mule-izing Code
-@subsection Mule-izing Code
-
-A lot of code is written without Mule in mind, and needs to be made
-Mule-correct or "Mule-ized".  There is really no substitute for
-line-by-line analysis when doing this, but the following checklist can
-help:
-
-@itemize @bullet
-@item
-Check all uses of @code{XSTRING_DATA}.
-@item
-Check all uses of @code{build_string} and @code{make_string}.
-@item
-Check all uses of @code{tolower} and @code{toupper}.
-@item
-Check object print methods.
-@item
-Check for use of functions such as @code{write_c_string},
-@code{write_fmt_string}, @code{stderr_out}, @code{stdout_out}.
-@item
-Check all occurrences of @code{char} and correct to one of the other
-typedefs described above.
-@item
-Check all existing uses of @code{TO_EXTERNAL_FORMAT},
-@code{TO_INTERNAL_FORMAT}, and any convenience macros (grep for
-@samp{EXTERNAL_TO}, @samp{TO_EXTERNAL}, and @samp{TO_SIZED_EXTERNAL}).
-@item
-In Windows code, string literals may need to be encapsulated with @code{XETEXT}.
-@end itemize
-
-@node Techniques for XEmacs Developers
+@node Techniques for XEmacs Developers,  , Proper Use of Unsigned Types, Rules When Writing New C Code
 @section Techniques for XEmacs Developers
 @cindex techniques for XEmacs developers
 @cindex developers, techniques for XEmacs
@@ -3713,7 +3648,7 @@
 
 This macro evaluates its argument twice, and also fails if used like this:
 @example
-  if (flag) MARK_OBJECT (obj); else do_something();
+  if (flag) MARK_OBJECT (obj); else @code{do_something()};
 @end example
 
 A much better definition is
@@ -3866,6 +3801,17 @@
 @chapter Regression Testing XEmacs
 @cindex testing, regression
 
+@menu
+* How to Regression-Test::      
+* Modules for Regression Testing::  
+@end menu
+
+@node How to Regression-Test, Modules for Regression Testing, Regression Testing XEmacs, Regression Testing XEmacs
+@section How to Regression-Test
+@cindex how to regression-test
+@cindex regression-test, how to
+@cindex testing, regression, how to
+
 The source directory @file{tests/automated} contains XEmacs' automated
 test suite.  The usual way of running all the tests is running
 @code{make check} from the top-level build directory.
@@ -4086,16 +4032,46 @@
 is broken in a way that we weren't trying to test!)
 @end enumerate
 
-
-@node CVS Techniques, A Summary of the Various XEmacs Modules, Regression Testing XEmacs, Top
+@node Modules for Regression Testing,  , How to Regression-Test, Regression Testing XEmacs
+@section Modules for Regression Testing
+@cindex modules for regression testing
+@cindex regression testing, modules for
+
+@example
+@file{test-harness.el}
+@file{base64-tests.el}
+@file{byte-compiler-tests.el}
+@file{case-tests.el}
+@file{ccl-tests.el}
+@file{c-tests.el}
+@file{database-tests.el}
+@file{extent-tests.el}
+@file{hash-table-tests.el}
+@file{lisp-tests.el}
+@file{md5-tests.el}
+@file{mule-tests.el}
+@file{regexp-tests.el}
+@file{symbol-tests.el}
+@file{syntax-tests.el}
+@file{tag-tests.el}
+@file{weak-tests.el}
+@end example
+
+@file{test-harness.el} defines the macros @code{Assert},
+@code{Check-Error}, @code{Check-Error-Message}, and
+@code{Check-Message}.  The other files are test files, testing various
+XEmacs facilities.  @xref{Regression Testing XEmacs}.
+
+
+@node CVS Techniques, The Modules of XEmacs, Regression Testing XEmacs, Top
 @chapter CVS Techniques
 @cindex CVS techniques
 
 @menu
-* Merging a Branch into the Trunk::
+* Merging a Branch into the Trunk::  
 @end menu
 
-@node Merging a Branch into the Trunk
+@node Merging a Branch into the Trunk,  , CVS Techniques, CVS Techniques
 @section Merging a Branch into the Trunk
 @cindex merging a branch into the trunk
 
@@ -4177,35 +4153,515 @@
 @end enumerate
 
 
-@node A Summary of the Various XEmacs Modules, Allocation of Objects in XEmacs Lisp, CVS Techniques, Top
-@chapter A Summary of the Various XEmacs Modules
-@cindex modules, a summary of the various XEmacs
-
-  This is accurate as of XEmacs 20.0.
+@node The Modules of XEmacs, Allocation of Objects in XEmacs Lisp, CVS Techniques, Top
+@chapter The Modules of XEmacs
+@cindex modules of XEmacs
 
 @menu
-* Low-Level Modules::
-* Basic Lisp Modules::
-* Modules for Standard Editing Operations::
-* Editor-Level Control Flow Modules::
-* Modules for the Basic Displayable Lisp Objects::
-* Modules for other Display-Related Lisp Objects::
-* Modules for the Redisplay Mechanism::
-* Modules for Interfacing with the File System::
-* Modules for Other Aspects of the Lisp Interpreter and Object System::
-* Modules for Interfacing with the Operating System::
-* Modules for Interfacing with X Windows::
-* Modules for Internationalization::
-* Modules for Regression Testing::
+* A Summary of the Various XEmacs Modules::  
+* Low-Level Modules::           
+* Basic Lisp Modules::          
+* Modules for Standard Editing Operations::  
+* Modules for Interfacing with the File System::  
+* Modules for Other Aspects of the Lisp Interpreter and Object System::  
+* Modules for Interfacing with the Operating System::  
 @end menu
 
-@node Low-Level Modules
+@node A Summary of the Various XEmacs Modules, Low-Level Modules, The Modules of XEmacs, The Modules of XEmacs
+@section A Summary of the Various XEmacs Modules
+@cindex summary of the various XEmacs modules
+@cindex modules, summary of the various XEmacs
+
+The following is a list of the sections describing the various modules
+(i.e. files) that implement XEmacs.  Some of them are in this chapter;
+some of them are attached to the chapters describing the modules in
+question.
+
+@itemize @bullet
+@item
+@ref{Low-Level Modules}.
+@item
+@ref{Basic Lisp Modules}.
+@item
+@ref{Modules for Standard Editing Operations}.
+@item
+@ref{Editor-Level Control Flow Modules}.
+@item
+@ref{Modules for the Basic Displayable Lisp Objects}.
+@item
+@ref{Modules for other Display-Related Lisp Objects}.
+@item
+@ref{Modules for the Redisplay Mechanism}.
+@item
+@ref{Modules for Interfacing with the File System}.
+@item
+@ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item
+@ref{Modules for Interfacing with the Operating System}.
+@item
+@ref{Modules for Interfacing with MS Windows}.
+@item
+@ref{Modules for Interfacing with X Windows}.
+@item
+@ref{Modules for Internationalization}.
+@item
+@ref{Modules for Regression Testing}.
+@end itemize
+
+The following table contains cross-references from each module in XEmacs
+21.5 to the section (if any) describing it.
+
+@multitable {@file{intl-auto-encap-win32.c}} {@ref{Modules for Other Aspects of the Lisp Interpreter and Object System}}
+@item @file{Emacs.ad.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{EmacsFrame.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{EmacsFrame.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{EmacsFrameP.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{EmacsManager.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{EmacsManager.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{EmacsManagerP.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{EmacsShell-sub.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{EmacsShell.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{EmacsShell.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{EmacsShellP.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{ExternalClient-Xlib.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{ExternalClient.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{ExternalClient.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{ExternalClientP.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{ExternalShell.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{ExternalShell.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{ExternalShellP.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{Makefile.in.in} @tab
+@item @file{abbrev.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{alloc.c} @tab @ref{Basic Lisp Modules}.
+@item @file{alloca.c} @tab @ref{Low-Level Modules}.
+@item @file{alloca.s} @tab
+@item @file{backtrace.h} @tab @ref{Basic Lisp Modules}.
+@item @file{balloon-x.c} @tab
+@item @file{balloon_help.c} @tab
+@item @file{balloon_help.h} @tab
+@item @file{base64-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{bitmaps.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{blocktype.c} @tab @ref{Low-Level Modules}.
+@item @file{blocktype.h} @tab @ref{Low-Level Modules}.
+@item @file{broken-sun.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{buffer.c} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{buffer.h} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{bufslots.h} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{byte-compiler-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{bytecode.c} @tab @ref{Basic Lisp Modules}.
+@item @file{bytecode.h} @tab @ref{Basic Lisp Modules}.
+@item @file{c-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{callint.c} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{case-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{casefiddle.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{casetab.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{casetab.h} @tab
+@item @file{ccl-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{charset.h} @tab
+@item @file{chartab.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{chartab.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{cm.c} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{cm.h} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{cmdloop.c} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{cmds.c} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{coding-system-slots.h} @tab
+@item @file{commands.h} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{compiler.h} @tab
+@item @file{config.h.in} @tab
+@item @file{config.h} @tab @ref{Low-Level Modules}.
+@item @file{conslots.h} @tab
+@item @file{console-gtk-impl.h} @tab
+@item @file{console-gtk.c} @tab
+@item @file{console-gtk.h} @tab
+@item @file{console-impl.h} @tab
+@item @file{console-msw-impl.h} @tab
+@item @file{console-msw.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{console-msw.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{console-stream-impl.h} @tab
+@item @file{console-stream.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{console-stream.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{console-tty-impl.h} @tab
+@item @file{console-tty.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{console-tty.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{console-x-impl.h} @tab
+@item @file{console-x.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{console-x.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{console.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{console.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{data.c} @tab @ref{Basic Lisp Modules}.
+@item @file{database-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{database.c} @tab
+@item @file{database.h} @tab
+@item @file{debug.c} @tab @ref{Low-Level Modules}.
+@item @file{debug.h} @tab @ref{Low-Level Modules}.
+@item @file{depend} @tab
+@item @file{device-gtk.c} @tab
+@item @file{device-impl.h} @tab
+@item @file{device-msw.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{device-tty.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{device-x.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{device.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{device.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{devslots.h} @tab
+@item @file{dgif_lib.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{dialog-gtk.c} @tab
+@item @file{dialog-msw.c} @tab
+@item @file{dialog-x.c} @tab
+@item @file{dialog.c} @tab
+@item @file{dired-msw.c} @tab
+@item @file{dired.c} @tab @ref{Modules for Interfacing with the File System}.
+@item @file{doc.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{doprnt.c} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{dragdrop.c} @tab
+@item @file{dragdrop.h} @tab
+@item @file{dump-data.c} @tab
+@item @file{dump-data.h} @tab
+@item @file{dump-id.c} @tab
+@item @file{dumper.c} @tab
+@item @file{dumper.h} @tab
+@item @file{dynarr.c} @tab @ref{Low-Level Modules}.
+@item @file{ecrt0.c} @tab @ref{Low-Level Modules}.
+@item @file{editfns.c} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{elhash.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{elhash.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{emacs-marshals.c} @tab
+@item @file{emacs-new.c.old} @tab
+@item @file{emacs-widget-accessors.c} @tab
+@item @file{emacs.c} @tab @ref{Low-Level Modules}.
+@item @file{emodules.c} @tab
+@item @file{emodules.h} @tab
+@item @file{esd.c} @tab
+@item @file{eval.c} @tab @ref{Basic Lisp Modules}.
+@item @file{event-Xt.c} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{event-gtk.c} @tab
+@item @file{event-gtk.h} @tab
+@item @file{event-msw.c} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{event-stream.c} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{event-tty.c} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{event-unixoid.c} @tab
+@item @file{event-xlike-inc.c} @tab
+@item @file{events-mod.h} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{events.c} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{events.h} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{extent-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{extents-impl.h} @tab
+@item @file{extents.c} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{extents.h} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{extw-Xlib.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{extw-Xlib.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{extw-Xt.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{extw-Xt.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{faces.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{faces.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{file-coding.c} @tab @ref{Modules for Internationalization}.
+@item @file{file-coding.h} @tab @ref{Modules for Internationalization}.
+@item @file{fileio.c} @tab @ref{Modules for Interfacing with the File System}.
+@item @file{filelock.c} @tab @ref{Modules for Interfacing with the File System}.
+@item @file{filemode.c} @tab @ref{Modules for Interfacing with the File System}.
+@item @file{floatfns.c} @tab @ref{Basic Lisp Modules}.
+@item @file{fns.c} @tab @ref{Basic Lisp Modules}.
+@item @file{font-lock.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{frame-gtk.c} @tab
+@item @file{frame-impl.h} @tab
+@item @file{frame-msw.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{frame-tty.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{frame-x.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{frame.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{frame.diff} @tab
+@item @file{frame.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{frameslots.h} @tab
+@item @file{free-hook.c} @tab @ref{Low-Level Modules}.
+@item @file{gccache-gtk.c} @tab
+@item @file{gccache-gtk.h} @tab
+@item @file{general-slots.h} @tab
+@item @file{general.c} @tab @ref{Basic Lisp Modules}.
+@item @file{getloadavg.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{getpagesize.h} @tab @ref{Low-Level Modules}.
+@item @file{gif_err.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{gif_io.c} @tab
+@item @file{gif_lib.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{gifalloc.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{gifrlib.h} @tab
+@item @file{glade.c} @tab
+@item @file{glyphs-eimage.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{glyphs-gtk.c} @tab
+@item @file{glyphs-gtk.h} @tab
+@item @file{glyphs-msw.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{glyphs-msw.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{glyphs-shared.c} @tab
+@item @file{glyphs-widget.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{glyphs-x.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{glyphs-x.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{glyphs.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{glyphs.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{gmalloc.c} @tab @ref{Low-Level Modules}.
+@item @file{gpmevent.c} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{gpmevent.h} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{gtk-glue.c} @tab
+@item @file{gtk-xemacs.c} @tab
+@item @file{gtk-xemacs.h} @tab
+@item @file{gui-gtk.c} @tab
+@item @file{gui-msw.c} @tab
+@item @file{gui-x.c} @tab
+@item @file{gui.c} @tab
+@item @file{gui.h} @tab
+@item @file{gutter.c} @tab
+@item @file{gutter.h} @tab
+@item @file{hash-table-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{hash.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{hash.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{hftctl.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{hpplay.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{imgproc.c} @tab
+@item @file{imgproc.h} @tab
+@item @file{indent.c} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{inline.c} @tab @ref{Low-Level Modules}.
+@item @file{input-method-motif.c} @tab
+@item @file{input-method-xlib.c} @tab
+@item @file{insdel.c} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{insdel.h} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{intl-auto-encap-win32.c} @tab
+@item @file{intl-auto-encap-win32.h} @tab
+@item @file{intl-encap-win32.c} @tab
+@item @file{intl-win32.c} @tab
+@item @file{intl-x.c} @tab
+@item @file{intl.c} @tab @ref{Modules for Internationalization}.
+@item @file{iso-wide.h} @tab @ref{Modules for Internationalization}.
+@item @file{keymap.c} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{keymap.h} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{lastfile.c} @tab @ref{Low-Level Modules}.
+@item @file{libinterface.c} @tab
+@item @file{libinterface.h} @tab
+@item @file{libsst.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{libsst.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{libst.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{line-number.c} @tab
+@item @file{line-number.h} @tab
+@item @file{linuxplay.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{lisp-disunion.h} @tab @ref{Basic Lisp Modules}.
+@item @file{lisp-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{lisp-union.h} @tab @ref{Basic Lisp Modules}.
+@item @file{lisp.h} @tab @ref{Basic Lisp Modules}.
+@item @file{lread.c} @tab @ref{Basic Lisp Modules}.
+@item @file{lrecord.h} @tab @ref{Basic Lisp Modules}.
+@item @file{lstream.c} @tab @ref{Modules for Interfacing with the File System}.
+@item @file{lstream.h} @tab @ref{Modules for Interfacing with the File System}.
+@item @file{macros.c} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{macros.h} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{make-src-depend} @tab
+@item @file{malloc.c} @tab @ref{Low-Level Modules}.
+@item @file{marker.c} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{md5-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{md5.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{mem-limits.h} @tab @ref{Low-Level Modules}.
+@item @file{menubar-gtk.c} @tab
+@item @file{menubar-msw.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{menubar-msw.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{menubar-x.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{menubar.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{menubar.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{minibuf.c} @tab @ref{Editor-Level Control Flow Modules}.
+@item @file{miscplay.c} @tab
+@item @file{miscplay.h} @tab
+@item @file{mule-canna.c} @tab @ref{Modules for Internationalization}.
+@item @file{mule-ccl.c} @tab @ref{Modules for Internationalization}.
+@item @file{mule-ccl.h} @tab
+@item @file{mule-charset.c} @tab @ref{Modules for Internationalization}.
+@item @file{mule-charset.h} @tab @ref{Modules for Internationalization}.
+@item @file{mule-coding.c} @tab @ref{Modules for Internationalization}.
+@item @file{mule-mcpath.c} @tab @ref{Modules for Internationalization}.
+@item @file{mule-mcpath.h} @tab @ref{Modules for Internationalization}.
+@item @file{mule-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{mule-wnnfns.c} @tab @ref{Modules for Internationalization}.
+@item @file{mule.c} @tab @ref{Modules for Internationalization}.
+@item @file{nas.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{native-gtk-toolbar.c} @tab
+@item @file{ndir.h} @tab @ref{Modules for Interfacing with the File System}.
+@item @file{nsselect.m} @tab
+@item @file{nt.c} @tab
+@item @file{ntheap.c} @tab
+@item @file{ntplay.c} @tab
+@item @file{number-gmp.c} @tab
+@item @file{number-gmp.h} @tab
+@item @file{number-mp.c} @tab
+@item @file{number-mp.h} @tab
+@item @file{number.c} @tab
+@item @file{number.h} @tab
+@item @file{objects-gtk-impl.h} @tab
+@item @file{objects-gtk.c} @tab
+@item @file{objects-gtk.h} @tab
+@item @file{objects-impl.h} @tab
+@item @file{objects-msw-impl.h} @tab
+@item @file{objects-msw.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{objects-msw.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{objects-tty-impl.h} @tab
+@item @file{objects-tty.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{objects-tty.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{objects-x-impl.h} @tab
+@item @file{objects-x.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{objects-x.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{objects.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{objects.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{offix-cursors.h} @tab
+@item @file{offix-types.h} @tab
+@item @file{offix.c} @tab
+@item @file{offix.h} @tab
+@item @file{opaque.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{opaque.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{paths.h.in} @tab
+@item @file{paths.h} @tab @ref{Low-Level Modules}.
+@item @file{ppc.ldscript} @tab
+@item @file{pre-crt0.c} @tab @ref{Low-Level Modules}.
+@item @file{print.c} @tab @ref{Basic Lisp Modules}.
+@item @file{process-nt.c} @tab
+@item @file{process-slots.h} @tab
+@item @file{process-unix.c} @tab
+@item @file{process.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{process.el} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{process.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{procimpl.h} @tab
+@item @file{profile.c.orig} @tab
+@item @file{profile.c.rej} @tab
+@item @file{profile.c} @tab
+@item @file{profile.h} @tab
+@item @file{ralloc.c} @tab @ref{Low-Level Modules}.
+@item @file{rangetab.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{rangetab.h} @tab
+@item @file{realpath.c} @tab @ref{Modules for Interfacing with the File System}.
+@item @file{redisplay-gtk.c} @tab
+@item @file{redisplay-msw.c} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{redisplay-output.c} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{redisplay-tty.c} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{redisplay-x.c} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{redisplay.c} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{redisplay.h} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{regex.c} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{regex.h} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{regexp-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{scrollbar-gtk.c} @tab
+@item @file{scrollbar-gtk.h} @tab
+@item @file{scrollbar-msw.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{scrollbar-msw.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{scrollbar-x.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{scrollbar-x.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{scrollbar.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{scrollbar.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{search.c} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{select-common.h} @tab
+@item @file{select-gtk.c} @tab
+@item @file{select-msw.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{select-x.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{select.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{select.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{sgiplay.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{sheap.c} @tab
+@item @file{signal.c} @tab @ref{Low-Level Modules}.
+@item @file{sound.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{sound.h} @tab
+@item @file{specifier.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{specifier.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{src-headers} @tab
+@item @file{strcat.c} @tab
+@item @file{strcmp.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{strcpy.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{strftime.c} @tab
+@item @file{sunOS-fix.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{sunplay.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{sunpro.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{symbol-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{symbols.c} @tab @ref{Basic Lisp Modules}.
+@item @file{symeval.h} @tab @ref{Basic Lisp Modules}.
+@item @file{symsinit.h} @tab @ref{Basic Lisp Modules}.
+@item @file{syntax-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{syntax.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{syntax.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}.
+@item @file{sysdep.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{sysdep.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{sysdir.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{sysdll.c} @tab
+@item @file{sysdll.h} @tab
+@item @file{sysfile.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{sysfloat.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{sysproc.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{syspwd.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{syssignal.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{systime.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{systty.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{syswait.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{syswindows.h} @tab
+@item @file{tag-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{termcap.c} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{terminfo.c} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{test-harness.el} @tab @ref{Modules for Regression Testing}.
+@item @file{tests.c} @tab
+@item @file{text.c} @tab
+@item @file{text.h} @tab
+@item @file{toolbar-common.c} @tab
+@item @file{toolbar-common.h} @tab
+@item @file{toolbar-gtk.c} @tab
+@item @file{toolbar-msw.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{toolbar-x.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{toolbar.c} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{toolbar.h} @tab @ref{Modules for other Display-Related Lisp Objects}.
+@item @file{tooltalk.c} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{tooltalk.h} @tab @ref{Modules for Interfacing with the Operating System}.
+@item @file{tparam.c} @tab @ref{Modules for the Redisplay Mechanism}.
+@item @file{ui-byhand.c} @tab
+@item @file{ui-gtk.c} @tab
+@item @file{ui-gtk.h} @tab
+@item @file{undo.c} @tab @ref{Modules for Standard Editing Operations}.
+@item @file{unexaix.c} @tab @ref{Low-Level Modules}.
+@item @file{unexalpha.c} @tab @ref{Low-Level Modules}.
+@item @file{unexapollo.c} @tab @ref{Low-Level Modules}.
+@item @file{unexconvex.c} @tab @ref{Low-Level Modules}.
+@item @file{unexcw.c} @tab
+@item @file{unexec.c} @tab @ref{Low-Level Modules}.
+@item @file{unexelf.c} @tab @ref{Low-Level Modules}.
+@item @file{unexelfsgi.c} @tab @ref{Low-Level Modules}.
+@item @file{unexencap.c} @tab @ref{Low-Level Modules}.
+@item @file{unexenix.c} @tab @ref{Low-Level Modules}.
+@item @file{unexfreebsd.c} @tab @ref{Low-Level Modules}.
+@item @file{unexfx2800.c} @tab @ref{Low-Level Modules}.
+@item @file{unexhp9k3.c} @tab @ref{Low-Level Modules}.
+@item @file{unexhp9k800.c} @tab @ref{Low-Level Modules}.
+@item @file{unexmips.c} @tab @ref{Low-Level Modules}.
+@item @file{unexnext.c} @tab @ref{Low-Level Modules}.
+@item @file{unexnt.c} @tab
+@item @file{unexsni.c} @tab
+@item @file{unexsol2-6.c} @tab
+@item @file{unexsol2.c} @tab @ref{Low-Level Modules}.
+@item @file{unexsunos4.c} @tab @ref{Low-Level Modules}.
+@item @file{unicode.c} @tab
+@item @file{universe.h} @tab @ref{Low-Level Modules}.
+@item @file{vm-limit.c} @tab @ref{Low-Level Modules}.
+@item @file{weak-tests.el} @tab @ref{Modules for Regression Testing}.
+@item @file{widget.c} @tab
+@item @file{win32.c} @tab
+@item @file{window-impl.h} @tab
+@item @file{window.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{window.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}.
+@item @file{winslots.h} @tab
+@item @file{xemacs.def.in.in} @tab
+@item @file{xgccache.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{xgccache.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{xintrinsic.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{xintrinsicp.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{xmmanagerp.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{xmotif.h} @tab
+@item @file{xmprimitivep.h} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{xmu.c} @tab @ref{Modules for Interfacing with X Windows}.
+@item @file{xmu.h} @tab @ref{Modules for Interfacing with X Windows}.
+@end multitable
+
+
+
+@node Low-Level Modules, Basic Lisp Modules, A Summary of the Various XEmacs Modules, The Modules of XEmacs
 @section Low-Level Modules
 @cindex low-level modules
 @cindex modules, low-level
 
 @example
-config.h
+@file{config.h}
 @end example
 
 This is automatically generated from @file{config.h.in} based on the
@@ -4216,7 +4672,7 @@
 
 
 @example
-paths.h
+@file{paths.h}
 @end example
 
 This is automatically generated from @file{paths.h.in} based on supplied
@@ -4226,8 +4682,8 @@
 
 
 @example
-emacs.c
-signal.c
+@file{emacs.c}
+@file{signal.c}
 @end example
 
 @file{emacs.c} contains @code{main()} and other code that performs the most
@@ -4247,23 +4703,23 @@
 
 
 @example
-unexaix.c
-unexalpha.c
-unexapollo.c
-unexconvex.c
-unexec.c
-unexelf.c
-unexelfsgi.c
-unexencap.c
-unexenix.c
-unexfreebsd.c
-unexfx2800.c
-unexhp9k3.c
-unexhp9k800.c
-unexmips.c
-unexnext.c
-unexsol2.c
-unexsunos4.c
+@file{unexaix.c}
+@file{unexalpha.c}
+@file{unexapollo.c}
+@file{unexconvex.c}
+@file{unexec.c}
+@file{unexelf.c}
+@file{unexelfsgi.c}
+@file{unexencap.c}
+@file{unexenix.c}
+@file{unexfreebsd.c}
+@file{unexfx2800.c}
+@file{unexhp9k3.c}
+@file{unexhp9k800.c}
+@file{unexmips.c}
+@file{unexnext.c}
+@file{unexsol2.c}
+@file{unexsunos4.c}
 @end example
 
 These modules contain code dumping out the XEmacs executable on various
@@ -4275,9 +4731,9 @@
 
 
 @example
-ecrt0.c
-lastfile.c
-pre-crt0.c
+@file{ecrt0.c}
+@file{lastfile.c}
+@file{pre-crt0.c}
 @end example
 
 These modules are used in conjunction with the dump mechanism.  On some
@@ -4302,14 +4758,14 @@
 
 
 @example
-alloca.c
-free-hook.c
-getpagesize.h
-gmalloc.c
-malloc.c
-mem-limits.h
-ralloc.c
-vm-limit.c
+@file{alloca.c}
+@file{free-hook.c}
+@file{getpagesize.h}
+@file{gmalloc.c}
+@file{malloc.c}
+@file{mem-limits.h}
+@file{ralloc.c}
+@file{vm-limit.c}
 @end example
 
 These handle basic C allocation of memory.  @file{alloca.c} is an emulation of
@@ -4365,9 +4821,9 @@
 
 
 @example
-blocktype.c
-blocktype.h
-dynarr.c
+@file{blocktype.c}
+@file{blocktype.h}
+@file{dynarr.c}
 @end example
 
 These implement a couple of basic C data types to facilitate memory
@@ -4391,7 +4847,7 @@
 
 
 @example
-inline.c
+@file{inline.c}
 @end example
 
 This module is used in connection with inline functions (available in
@@ -4405,8 +4861,8 @@
 
 
 @example
-debug.c
-debug.h
+@file{debug.c}
+@file{debug.h}
 @end example
 
 These functions provide a system for doing internal consistency checks
@@ -4417,24 +4873,24 @@
 
 
 @example
-universe.h
+@file{universe.h}
 @end example
 
 This is not currently used.
 
 
 
-@node Basic Lisp Modules
+@node Basic Lisp Modules, Modules for Standard Editing Operations, Low-Level Modules, The Modules of XEmacs
 @section Basic Lisp Modules
 @cindex Lisp modules, basic
 @cindex modules, basic Lisp
 
 @example
-lisp-disunion.h
-lisp-union.h
-lisp.h
-lrecord.h
-symsinit.h
+@file{lisp-disunion.h}
+@file{lisp-union.h}
+@file{lisp.h}
+@file{lrecord.h}
+@file{symsinit.h}
 @end example
 
 These are the basic header files for all XEmacs modules.  Each module
@@ -4477,7 +4933,7 @@
 
 
 @example
-alloc.c
+@file{alloc.c}
 @end example
 
 The large module @file{alloc.c} implements all of the basic allocation and
@@ -4505,8 +4961,8 @@
 
 
 @example
-eval.c
-backtrace.h
+@file{eval.c}
+@file{backtrace.h}
 @end example
 
 This module contains all of the functions to handle the flow of control.
@@ -4525,7 +4981,7 @@
 
 
 @example
-lread.c
+@file{lread.c}
 @end example
 
 This module implements the Lisp reader and the @code{read} function,
@@ -4536,7 +4992,7 @@
 
 
 @example
-print.c
+@file{print.c}
 @end example
 
 This module implements the Lisp print mechanism and the @code{print}
@@ -4548,9 +5004,9 @@
 
 
 @example
-general.c
-symbols.c
-symeval.h
+@file{general.c}
+@file{symbols.c}
+@file{symeval.h}
 @end example
 
 @file{symbols.c} implements the handling of symbols, obarrays, and
@@ -4568,9 +5024,9 @@
 
 
 @example
-data.c
-floatfns.c
-fns.c
+@file{data.c}
+@file{floatfns.c}
+@file{fns.c}
 @end example
 
 These modules implement the methods and standard Lisp primitives for all
@@ -4589,8 +5045,8 @@
 
 
 @example
-bytecode.c
-bytecode.h
+@file{bytecode.c}
+@file{bytecode.h}
 @end example
 
 @file{bytecode.c} implements the byte-code interpreter and
@@ -4600,15 +5056,15 @@
 
 
 
-@node Modules for Standard Editing Operations
+@node Modules for Standard Editing Operations, Modules for Interfacing with the File System, Basic Lisp Modules, The Modules of XEmacs
 @section Modules for Standard Editing Operations
 @cindex modules for standard editing operations
 @cindex editing operations, modules for standard
 
 @example
-buffer.c
-buffer.h
-bufslots.h
+@file{buffer.c}
+@file{buffer.h}
+@file{bufslots.h}
 @end example
 
 @file{buffer.c} implements the @dfn{buffer} Lisp object type.  This
@@ -4637,8 +5093,8 @@
 
 
 @example
-insdel.c
-insdel.h
+@file{insdel.c}
+@file{insdel.h}
 @end example
 
 @file{insdel.c} contains low-level functions for inserting and deleting text in
@@ -4652,7 +5108,7 @@
 
 
 @example
-marker.c
+@file{marker.c}
 @end example
 
 This module implements the @dfn{marker} Lisp object type, which
@@ -4671,8 +5127,8 @@
 
 
 @example
-extents.c
-extents.h
+@file{extents.c}
+@file{extents.h}
 @end example
 
 This module implements the @dfn{extent} Lisp object type, which is like
@@ -4692,7 +5148,7 @@
 
 
 @example
-editfns.c
+@file{editfns.c}
 @end example
 
 @file{editfns.c} contains the standard Lisp primitives for working with
@@ -4709,9 +5165,9 @@
 
 
 @example
-callint.c
-cmds.c
-commands.h
+@file{callint.c}
+@file{cmds.c}
+@file{commands.h}
 @end example
 
 @cindex interactive
@@ -4738,9 +5194,9 @@
 
 
 @example
-regex.c
-regex.h
-search.c
+@file{regex.c}
+@file{regex.h}
+@file{search.c}
 @end example
 
 @file{search.c} implements the Lisp primitives for searching for text in
@@ -4755,7 +5211,7 @@
 
 
 @example
-doprnt.c
+@file{doprnt.c}
 @end example
 
 @file{doprnt.c} implements formatted-string processing, similar to
@@ -4764,390 +5220,22 @@
 
 
 @example
-undo.c
+@file{undo.c}
 @end example
 
 This module implements the undo mechanism for tracking buffer changes.
 Most of this could be implemented in Lisp.
 
 
-
-@node Editor-Level Control Flow Modules
-@section Editor-Level Control Flow Modules
-@cindex control flow modules, editor-level
-@cindex modules, editor-level control flow
-
-@example
-event-Xt.c
-event-msw.c
-event-stream.c
-event-tty.c
-events-mod.h
-gpmevent.c
-gpmevent.h
-events.c
-events.h
-@end example
-
-These implement the handling of events (user input and other system
-notifications).
-
-@file{events.c} and @file{events.h} define the @dfn{event} Lisp object
-type and primitives for manipulating it.
-
-@file{event-stream.c} implements the basic functions for working with
-event queues, dispatching an event by looking it up in relevant keymaps
-and such, and handling timeouts; this includes the primitives
-@code{next-event} and @code{dispatch-event}, as well as related
-primitives such as @code{sit-for}, @code{sleep-for}, and
-@code{accept-process-output}. (@file{event-stream.c} is one of the
-hairiest and trickiest modules in XEmacs.  Beware!  You can easily mess
-things up here.)
-
-@file{event-Xt.c} and @file{event-tty.c} implement the low-level
-interfaces onto retrieving events from Xt (the X toolkit) and from TTY's
-(using @code{read()} and @code{select()}), respectively.  The event
-interface enforces a clean separation between the specific code for
-interfacing with the operating system and the generic code for working
-with events, by defining an API of basic, low-level event methods;
-@file{event-Xt.c} and @file{event-tty.c} are two different
-implementations of this API.  To add support for a new operating system
-(e.g. NeXTstep), one merely needs to provide another implementation of
-those API functions.
-
-Note that the choice of whether to use @file{event-Xt.c} or
-@file{event-tty.c} is made at compile time!  Or at the very latest, it
-is made at startup time.  @file{event-Xt.c} handles events for
-@emph{both} X and TTY frames; @file{event-tty.c} is only used when X
-support is not compiled into XEmacs.  The reason for this is that there
-is only one event loop in XEmacs: thus, it needs to be able to receive
-events from all different kinds of frames.
-
-
-
-@example
-keymap.c
-keymap.h
-@end example
-
-@file{keymap.c} and @file{keymap.h} define the @dfn{keymap} Lisp object
-type and associated methods and primitives. (Remember that keymaps are
-objects that associate event descriptions with functions to be called to
-``execute'' those events; @code{dispatch-event} looks up events in the
-relevant keymaps.)
-
-
-
-@example
-cmdloop.c
-@end example
-
-@file{cmdloop.c} contains functions that implement the actual editor
-command loop---i.e. the event loop that cyclically retrieves and
-dispatches events.  This code is also rather tricky, just like
-@file{event-stream.c}.
-
-
-
-@example
-macros.c
-macros.h
-@end example
-
-These two modules contain the basic code for defining keyboard macros.
-These functions don't actually do much; most of the code that handles keyboard
-macros is mixed in with the event-handling code in @file{event-stream.c}.
-
-
-
-@example
-minibuf.c
-@end example
-
-This contains some miscellaneous code related to the minibuffer (most of
-the minibuffer code was moved into Lisp by Richard Mlynarik).  This
-includes the primitives for completion (although filename completion is
-in @file{dired.c}), the lowest-level interface to the minibuffer (if the
-command loop were cleaned up, this too could be in Lisp), and code for
-dealing with the echo area (this, too, was mostly moved into Lisp, and
-the only code remaining is code to call out to Lisp or provide simple
-bootstrapping implementations early in temacs, before the echo-area Lisp
-code is loaded).
-
-
-
-@node Modules for the Basic Displayable Lisp Objects
-@section Modules for the Basic Displayable Lisp Objects
-@cindex modules for the basic displayable Lisp objects
-@cindex displayable Lisp objects, modules for the basic
-@cindex Lisp objects, modules for the basic displayable
-@cindex objects, modules for the basic displayable Lisp
-
-@example
-console-msw.c
-console-msw.h
-console-stream.c
-console-stream.h
-console-tty.c
-console-tty.h
-console-x.c
-console-x.h
-console.c
-console.h
-@end example
-
-These modules implement the @dfn{console} Lisp object type.  A console
-contains multiple display devices, but only one keyboard and mouse.
-Most of the time, a console will contain exactly one device.
-
-Consoles are the top of a lisp object inclusion hierarchy.  Consoles
-contain devices, which contain frames, which contain windows.
-
-
-
-@example
-device-msw.c
-device-tty.c
-device-x.c
-device.c
-device.h
-@end example
-
-These modules implement the @dfn{device} Lisp object type.  This
-abstracts a particular screen or connection on which frames are
-displayed.  As with Lisp objects, event interfaces, and other
-subsystems, the device code is separated into a generic component that
-contains a standardized interface (in the form of a set of methods) onto
-particular device types.
-
-The device subsystem defines all the methods and provides method
-services for not only device operations but also for the frame, window,
-menubar, scrollbar, toolbar, and other displayable-object subsystems.
-The reason for this is that all of these subsystems have the same
-subtypes (X, TTY, NeXTstep, Microsoft Windows, etc.) as devices do.
-
-
-
-@example
-frame-msw.c
-frame-tty.c
-frame-x.c
-frame.c
-frame.h
-@end example
-
-Each device contains one or more frames in which objects (e.g. text) are
-displayed.  A frame corresponds to a window in the window system;
-usually this is a top-level window but it could potentially be one of a
-number of overlapping child windows within a top-level window, using the
-MDI (Multiple Document Interface) protocol in Microsoft Windows or a
-similar scheme.
-
-The @file{frame-*} files implement the @dfn{frame} Lisp object type and
-provide the generic and device-type-specific operations on frames
-(e.g. raising, lowering, resizing, moving, etc.).
-
-
-
-@example
-window.c
-window.h
-@end example
-
-@cindex window (in Emacs)
-@cindex pane
-Each frame consists of one or more non-overlapping @dfn{windows} (better
-known as @dfn{panes} in standard window-system terminology) in which a
-buffer's text can be displayed.  Windows can also have scrollbars
-displayed around their edges.
-
-@file{window.c} and @file{window.h} implement the @dfn{window} Lisp
-object type and provide code to manage windows.  Since windows have no
-associated resources in the window system (the window system knows only
-about the frame; no child windows or anything are used for XEmacs
-windows), there is no device-type-specific code here; all of that code
-is part of the redisplay mechanism or the code for particular object
-types such as scrollbars.
-
-
-
-@node Modules for other Display-Related Lisp Objects
-@section Modules for other Display-Related Lisp Objects
-@cindex modules for other display-related Lisp objects
-@cindex display-related Lisp objects, modules for other
-@cindex Lisp objects, modules for other display-related
-
-@example
-faces.c
-faces.h
-@end example
-
-
-
-@example
-bitmaps.h
-glyphs-eimage.c
-glyphs-msw.c
-glyphs-msw.h
-glyphs-widget.c
-glyphs-x.c
-glyphs-x.h
-glyphs.c
-glyphs.h
-@end example
-
-
-
-@example
-objects-msw.c
-objects-msw.h
-objects-tty.c
-objects-tty.h
-objects-x.c
-objects-x.h
-objects.c
-objects.h
-@end example
-
-
-
-@example
-menubar-msw.c
-menubar-msw.h
-menubar-x.c
-menubar.c
-menubar.h
-@end example
-
-
-
-@example
-scrollbar-msw.c
-scrollbar-msw.h
-scrollbar-x.c
-scrollbar-x.h
-scrollbar.c
-scrollbar.h
-@end example
-
-
-
-@example
-toolbar-msw.c
-toolbar-x.c
-toolbar.c
-toolbar.h
-@end example
-
-
-
-@example
-font-lock.c
-@end example
-
-This file provides C support for syntax highlighting---i.e.
-highlighting different syntactic constructs of a source file in
-different colors, for easy reading.  The C support is provided so that
-this is fast.
-
-
-
-@example
-dgif_lib.c
-gif_err.c
-gif_lib.h
-gifalloc.c
-@end example
-
-These modules decode GIF-format image files, for use with glyphs.
-These files were removed due to Unisys patent infringement concerns.
-
-
-
-@node Modules for the Redisplay Mechanism
-@section Modules for the Redisplay Mechanism
-@cindex modules for the redisplay mechanism
-@cindex redisplay mechanism, modules for the
-
-@example
-redisplay-output.c
-redisplay-msw.c
-redisplay-tty.c
-redisplay-x.c
-redisplay.c
-redisplay.h
-@end example
-
-These files provide the redisplay mechanism.  As with many other
-subsystems in XEmacs, there is a clean separation between the general
-and device-specific support.
-
-@file{redisplay.c} contains the bulk of the redisplay engine.  These
-functions update the redisplay structures (which describe how the screen
-is to appear) to reflect any changes made to the state of any
-displayable objects (buffer, frame, window, etc.) since the last time
-that redisplay was called.  These functions are highly optimized to
-avoid doing more work than necessary (since redisplay is called
-extremely often and is potentially a huge time sink), and depend heavily
-on notifications from the objects themselves that changes have occurred,
-so that redisplay doesn't explicitly have to check each possible object.
-The redisplay mechanism also contains a great deal of caching to further
-speed things up; some of this caching is contained within the various
-displayable objects.
-
-@file{redisplay-output.c} goes through the redisplay structures and converts
-them into calls to device-specific methods to actually output the screen
-changes.
-
-@file{redisplay-x.c} and @file{redisplay-tty.c} are two implementations
-of these redisplay output methods, for X frames and TTY frames,
-respectively.
-
-
-
-@example
-indent.c
-@end example
-
-This module contains various functions and Lisp primitives for
-converting between buffer positions and screen positions.  These
-functions call the redisplay mechanism to do most of the work, and then
-examine the redisplay structures to get the necessary information.  This
-module needs work.
-
-
-
-@example
-termcap.c
-terminfo.c
-tparam.c
-@end example
-
-These files contain functions for working with the termcap (BSD-style)
-and terminfo (System V style) databases of terminal capabilities and
-escape sequences, used when XEmacs is displaying in a TTY.
-
-
-
-@example
-cm.c
-cm.h
-@end example
-
-These files provide some miscellaneous TTY-output functions and should
-probably be merged into @file{redisplay-tty.c}.
-
-
-
-@node Modules for Interfacing with the File System
+@node Modules for Interfacing with the File System, Modules for Other Aspects of the Lisp Interpreter and Object System, Modules for Standard Editing Operations, The Modules of XEmacs
 @section Modules for Interfacing with the File System
 @cindex modules for interfacing with the file system
 @cindex interfacing with the file system, modules for
 @cindex file system, modules for interfacing with the
 
 @example
-lstream.c
-lstream.h
+@file{lstream.c}
+@file{lstream.h}
 @end example
 
 These modules implement the @dfn{stream} Lisp object type.  This is an
@@ -5174,7 +5262,7 @@
 
 
 @example
-fileio.c
+@file{fileio.c}
 @end example
 
 This implements the basic primitives for interfacing with the file
@@ -5191,7 +5279,7 @@
 
 
 @example
-filelock.c
+@file{filelock.c}
 @end example
 
 This file provides functions for detecting clashes between different
@@ -5206,7 +5294,7 @@
 
 
 @example
-filemode.c
+@file{filemode.c}
 @end example
 
 This file provides some miscellaneous functions that construct a
@@ -5217,8 +5305,8 @@
 
 
 @example
-dired.c
-ndir.h
+@file{dired.c}
+@file{ndir.h}
 @end example
 
 These files implement the XEmacs interface to directory searching.  This
@@ -5234,7 +5322,7 @@
 
 
 @example
-realpath.c
+@file{realpath.c}
 @end example
 
 This file provides an implementation of the @code{realpath()} function
@@ -5243,7 +5331,7 @@
 
 
 
-@node Modules for Other Aspects of the Lisp Interpreter and Object System
+@node Modules for Other Aspects of the Lisp Interpreter and Object System, Modules for Interfacing with the Operating System, Modules for Interfacing with the File System, The Modules of XEmacs
 @section Modules for Other Aspects of the Lisp Interpreter and Object System
 @cindex modules for other aspects of the Lisp interpreter and object system
 @cindex Lisp interpreter and object system, modules for other aspects of the
@@ -5251,10 +5339,10 @@
 @cindex object system, modules for other aspects of the Lisp interpreter and
 
 @example
-elhash.c
-elhash.h
-hash.c
-hash.h
+@file{elhash.c}
+@file{elhash.h}
+@file{hash.c}
+@file{hash.h}
 @end example
 
 These files provide two implementations of hash tables.  Files
@@ -5267,8 +5355,8 @@
 
 
 @example
-specifier.c
-specifier.h
+@file{specifier.c}
+@file{specifier.h}
 @end example
 
 This module implements the @dfn{specifier} Lisp object type.  This is
@@ -5284,9 +5372,9 @@
 
 
 @example
-chartab.c
-chartab.h
-casetab.c
+@file{chartab.c}
+@file{chartab.h}
+@file{casetab.c}
 @end example
 
 @file{chartab.c} and @file{chartab.h} implement the @dfn{char table}
@@ -5306,8 +5394,8 @@
 
 
 @example
-syntax.c
-syntax.h
+@file{syntax.c}
+@file{syntax.h}
 @end example
 
 @cindex scanner
@@ -5376,7 +5464,7 @@
 
 
 @example
-casefiddle.c
+@file{casefiddle.c}
 @end example
 
 This module implements various Lisp primitives for upcasing, downcasing
@@ -5385,7 +5473,7 @@
 
 
 @example
-rangetab.c
+@file{rangetab.c}
 @end example
 
 This module implements the @dfn{range table} Lisp object type, which
@@ -5395,8 +5483,8 @@
 
 
 @example
-opaque.c
-opaque.h
+@file{opaque.c}
+@file{opaque.h}
 @end example
 
 This module implements the @dfn{opaque} Lisp object type, an
@@ -5418,7 +5506,7 @@
 
 
 @example
-abbrev.c
+@file{abbrev.c}
 @end example
 
 This module provides a few primitives for doing dynamic abbreviation
@@ -5431,7 +5519,7 @@
 
 
 @example
-doc.c
+@file{doc.c}
 @end example
 
 This module provides primitives for retrieving the documentation
@@ -5450,7 +5538,7 @@
 
 
 @example
-md5.c
+@file{md5.c}
 @end example
 
 This module provides a Lisp primitive that implements the MD5 secure
@@ -5461,16 +5549,16 @@
 
 
 
-@node Modules for Interfacing with the Operating System
+@node Modules for Interfacing with the Operating System,  , Modules for Other Aspects of the Lisp Interpreter and Object System, The Modules of XEmacs
 @section Modules for Interfacing with the Operating System
 @cindex modules for interfacing with the operating system
 @cindex interfacing with the operating system, modules for
 @cindex operating system, modules for interfacing with the
 
 @example
-process.el
-process.c
-process.h
+@file{process.el}
+@file{process.c}
+@file{process.h}
 @end example
 
 These modules allow XEmacs to spawn and communicate with subprocesses
@@ -5515,8 +5603,8 @@
 
 
 @example
-sysdep.c
-sysdep.h
+@file{sysdep.c}
+@file{sysdep.h}
 @end example
 
   These modules implement most of the low-level, messy operating-system
@@ -5529,15 +5617,15 @@
 
 
 @example
-sysdir.h
-sysfile.h
-sysfloat.h
-sysproc.h
-syspwd.h
-syssignal.h
-systime.h
-systty.h
-syswait.h
+@file{sysdir.h}
+@file{sysfile.h}
+@file{sysfloat.h}
+@file{sysproc.h}
+@file{syspwd.h}
+@file{syssignal.h}
+@file{systime.h}
+@file{systty.h}
+@file{syswait.h}
 @end example
 
 These header files provide consistent interfaces onto system-dependent
@@ -5592,15 +5680,15 @@
 
 
 @example
-hpplay.c
-libsst.c
-libsst.h
-libst.h
-linuxplay.c
-nas.c
-sgiplay.c
-sound.c
-sunplay.c
+@file{hpplay.c}
+@file{libsst.c}
+@file{libsst.h}
+@file{libst.h}
+@file{linuxplay.c}
+@file{nas.c}
+@file{sgiplay.c}
+@file{sound.c}
+@file{sunplay.c}
 @end example
 
 These files implement the ability to play various sounds on some types
@@ -5637,8 +5725,8 @@
 
 
 @example
-tooltalk.c
-tooltalk.h
+@file{tooltalk.c}
+@file{tooltalk.h}
 @end example
 
 These two modules implement an interface to the ToolTalk protocol, which
@@ -5654,7 +5742,7 @@
 
 
 @example
-getloadavg.c
+@file{getloadavg.c}
 @end example
 
 This module provides the ability to retrieve the system's current load
@@ -5664,7 +5752,7 @@
 
 
 @example
-sunpro.c
+@file{sunpro.c}
 @end example
 
 This module provides a small amount of code used internally at Sun to
@@ -5673,10 +5761,10 @@
 
 
 @example
-broken-sun.h
-strcmp.c
-strcpy.c
-sunOS-fix.c
+@file{broken-sun.h}
+@file{strcmp.c}
+@file{strcpy.c}
+@file{sunOS-fix.c}
 @end example
 
 These files provide replacement functions and prototypes to fix numerous
@@ -5685,302 +5773,38 @@
 
 
 @example
-hftctl.c
+@file{hftctl.c}
 @end example
 
 This module provides some terminal-control code necessary on versions of
 AIX prior to 4.1.
 
 
-
-@node Modules for Interfacing with X Windows
-@section Modules for Interfacing with X Windows
-@cindex modules for interfacing with X Windows
-@cindex interfacing with X Windows, modules for
-@cindex X Windows, modules for interfacing with
-
-@example
-Emacs.ad.h
-@end example
-
-A file generated from @file{Emacs.ad}, which contains XEmacs-supplied
-fallback resources (so that XEmacs has pretty defaults).
-
-
-
-@example
-EmacsFrame.c
-EmacsFrame.h
-EmacsFrameP.h
-@end example
-
-These modules implement an Xt widget class that encapsulates a frame.
-This is for ease in integrating with Xt.  The EmacsFrame widget covers
-the entire X window except for the menubar; the scrollbars are
-positioned on top of the EmacsFrame widget.
-
-@strong{Warning:} Abandon hope, all ye who enter here.  This code took
-an ungodly amount of time to get right, and is likely to fall apart
-mercilessly at the slightest change.  Such is life under Xt.
-
-
-
-@example
-EmacsManager.c
-EmacsManager.h
-EmacsManagerP.h
-@end example
-
-These modules implement a simple Xt manager (i.e. composite) widget
-class that simply lets its children set whatever geometry they want.
-It's amazing that Xt doesn't provide this standardly, but on second
-thought, it makes sense, considering how amazingly broken Xt is.
-
-
-@example
-EmacsShell-sub.c
-EmacsShell.c
-EmacsShell.h
-EmacsShellP.h
-@end example
-
-These modules implement two Xt widget classes that are subclasses of
-the TopLevelShell and TransientShell classes.  This is necessary to deal
-with more brokenness that Xt has sadistically thrust onto the backs of
-developers.
-
-
-
-@example
-xgccache.c
-xgccache.h
-@end example
-
-These modules provide functions for maintenance and caching of GC's
-(graphics contexts) under the X Window System.  This code is junky and
-needs to be rewritten.
-
-
-
-@example
-select-msw.c
-select-x.c
-select.c
-select.h
-@end example
-
-@cindex selections
-  This module provides an interface to the X Window System's concept of
-@dfn{selections}, the standard way for X applications to communicate
-with each other.
-
-
-
-@example
-xintrinsic.h
-xintrinsicp.h
-xmmanagerp.h
-xmprimitivep.h
-@end example
-
-These header files are similar in spirit to the @file{sys*.h} files and buffer
-against different implementations of Xt and Motif.
-
-@itemize @bullet
-@item
-@file{xintrinsic.h} should be included in place of @file{<Intrinsic.h>}.
-@item
-@file{xintrinsicp.h} should be included in place of @file{<IntrinsicP.h>}.
-@item
-@file{xmmanagerp.h} should be included in place of @file{<XmManagerP.h>}.
-@item
-@file{xmprimitivep.h} should be included in place of @file{<XmPrimitiveP.h>}.
-@end itemize
-
-
-
-@example
-xmu.c
-xmu.h
-@end example
-
-These files provide an emulation of the Xmu library for those systems
-(i.e. HPUX) that don't provide it as a standard part of X.
-
-
-
-@example
-ExternalClient-Xlib.c
-ExternalClient.c
-ExternalClient.h
-ExternalClientP.h
-ExternalShell.c
-ExternalShell.h
-ExternalShellP.h
-extw-Xlib.c
-extw-Xlib.h
-extw-Xt.c
-extw-Xt.h
-@end example
-
-@cindex external widget
-  These files provide the @dfn{external widget} interface, which allows an
-XEmacs frame to appear as a widget in another application.  To do this,
-you have to configure with @samp{--external-widget}.
-
-@file{ExternalShell*} provides the server (XEmacs) side of the
-connection.
-
-@file{ExternalClient*} provides the client (other application) side of
-the connection.  These files are not compiled into XEmacs but are
-compiled into libraries that are then linked into your application.
-
-@file{extw-*} is common code that is used for both the client and server.
-
-Don't touch this code; something is liable to break if you do.
-
-
-
-@node Modules for Internationalization
-@section Modules for Internationalization
-@cindex modules for internationalization
-@cindex internationalization, modules for
-
-@example
-mule-canna.c
-mule-ccl.c
-mule-charset.c
-mule-charset.h
-file-coding.c
-file-coding.h
-mule-coding.c
-mule-mcpath.c
-mule-mcpath.h
-mule-wnnfns.c
-mule.c
-@end example
-
-These files implement the MULE (Asian-language) support.  Note that MULE
-actually provides a general interface for all sorts of languages, not
-just Asian languages (although they are generally the most complicated
-to support).  This code is still in beta.
-
-@file{mule-charset.*} and @file{file-coding.*} provide the heart of the
-XEmacs MULE support.  @file{mule-charset.*} implements the @dfn{charset}
-Lisp object type, which encapsulates a character set (an ordered one- or
-two-dimensional set of characters, such as US ASCII or JISX0208 Japanese
-Kanji).
-
-@file{file-coding.*} implements the @dfn{coding-system} Lisp object
-type, which encapsulates a method of converting between different
-encodings.  An encoding is a representation of a stream of characters,
-possibly from multiple character sets, using a stream of bytes or words,
-and defines (e.g.) which escape sequences are used to specify particular
-character sets, how the indices for a character are converted into bytes
-(sometimes this involves setting the high bit; sometimes complicated
-rearranging of the values takes place, as in the Shift-JIS encoding),
-etc.  It also contains some generic coding system implementations, such
-as the binary (no-conversion) coding system and a sample gzip coding system.
-
-@file{mule-coding.c} contains the implementations of text coding systems.
-
-@file{mule-ccl.c} provides the CCL (Code Conversion Language)
-interpreter.  CCL is similar in spirit to Lisp byte code and is used to
-implement converters for custom encodings.
-
-@file{mule-canna.c} and @file{mule-wnnfns.c} implement interfaces to
-external programs used to implement the Canna and WNN input methods,
-respectively.  This is currently in beta.
-
-@file{mule-mcpath.c} provides some functions to allow for pathnames
-containing extended characters.  This code is fragmentary, obsolete, and
-completely non-working.  Instead, @code{pathname-coding-system} is used
-to specify conversions of names of files and directories.  The standard
-C I/O functions like @samp{open()} are wrapped so that conversion occurs
-automatically.
-
-@file{mule.c} contains a few miscellaneous things.  It currently seems
-to be unused and probably should be removed.
-
-
-
-@example
-intl.c
-@end example
-
-This provides some miscellaneous internationalization code for
-implementing message translation and interfacing to the Ximp input
-method.  None of this code is currently working.
-
-
-
-@example
-iso-wide.h
-@end example
-
-This contains leftover code from an earlier implementation of
-Asian-language support, and is not currently used.
-
-
-
-
-@node Modules for Regression Testing
-@section Modules for Regression Testing
-@cindex modules for regression testing
-@cindex regression testing, modules for
-
-@example
-test-harness.el
-base64-tests.el
-byte-compiler-tests.el
-case-tests.el
-ccl-tests.el
-c-tests.el
-database-tests.el
-extent-tests.el
-hash-table-tests.el
-lisp-tests.el
-md5-tests.el
-mule-tests.el
-regexp-tests.el
-symbol-tests.el
-syntax-tests.el
-tag-tests.el
-weak-tests.el
-@end example
-
-@file{test-harness.el} defines the macros @code{Assert},
-@code{Check-Error}, @code{Check-Error-Message}, and
-@code{Check-Message}.  The other files are test files, testing various
-XEmacs facilities.  @xref{Regression Testing XEmacs}.
-
-
-
-@node Allocation of Objects in XEmacs Lisp, Dumping, A Summary of the Various XEmacs Modules, Top
+@node Allocation of Objects in XEmacs Lisp, Dumping, The Modules of XEmacs, Top
 @chapter Allocation of Objects in XEmacs Lisp
 @cindex allocation of objects in XEmacs Lisp
 @cindex objects in XEmacs Lisp, allocation of
 @cindex Lisp objects, allocation of in XEmacs
 
 @menu
-* Introduction to Allocation::
-* Garbage Collection::
-* GCPROing::
-* Garbage Collection - Step by Step::
-* Integers and Characters::
-* Allocation from Frob Blocks::
-* lrecords::
-* Low-level allocation::
-* Cons::
-* Vector::
-* Bit Vector::
-* Symbol::
-* Marker::
-* String::
-* Compiled Function::
+* Introduction to Allocation::  
+* Garbage Collection::          
+* GCPROing::                    
+* Garbage Collection - Step by Step::  
+* Integers and Characters::     
+* Allocation from Frob Blocks::  
+* lrecords::                    
+* Low-level allocation::        
+* Cons::                        
+* Vector::                      
+* Bit Vector::                  
+* Symbol::                      
+* Marker::                      
+* String::                      
+* Compiled Function::           
 @end menu
 
-@node Introduction to Allocation
+@node Introduction to Allocation, Garbage Collection, Allocation of Objects in XEmacs Lisp, Allocation of Objects in XEmacs Lisp
 @section Introduction to Allocation
 @cindex allocation, introduction to
 
@@ -6052,14 +5876,14 @@
 in directly-tagged fashion.
 
 
-@node Garbage Collection
+@node Garbage Collection, GCPROing, Introduction to Allocation, Allocation of Objects in XEmacs Lisp
 @section Garbage Collection
 @cindex garbage collection
 
 @cindex mark and sweep
   Garbage collection is simple in theory but tricky to implement.
 Emacs Lisp uses the oldest garbage collection method, called
-@dfn{mark and sweep}.  Garbage collection begins by starting with
+@dfn{mark and sweep}.  Garbage collection begins by starting with    
 all accessible locations (i.e. all variables and other slots where
 Lisp objects might occur) and recursively traversing all objects
 accessible from those slots, marking each one that is found.
@@ -6076,7 +5900,7 @@
 garbage collection (according to @code{gc-cons-threshold}).
 
 
-@node GCPROing
+@node GCPROing, Garbage Collection - Step by Step, Garbage Collection, Allocation of Objects in XEmacs Lisp
 @section @code{GCPRO}ing
 @cindex @code{GCPRO}ing
 @cindex garbage collection protection
@@ -6251,22 +6075,22 @@
 it obviates the need for @code{GCPRO}ing, and allows garbage collection
 to happen at any point at all, such as during object allocation.
 
-@node Garbage Collection - Step by Step
+@node Garbage Collection - Step by Step, Integers and Characters, GCPROing, Allocation of Objects in XEmacs Lisp
 @section Garbage Collection - Step by Step
 @cindex garbage collection - step by step
 
 @menu
-* Invocation::
-* garbage_collect_1::
-* mark_object::
-* gc_sweep::
-* sweep_lcrecords_1::
-* compact_string_chars::
-* sweep_strings::
-* sweep_bit_vectors_1::
+* Invocation::                  
+* garbage_collect_1::           
+* mark_object::                 
+* gc_sweep::                    
+* sweep_lcrecords_1::           
+* compact_string_chars::        
+* sweep_strings::               
+* sweep_bit_vectors_1::         
 @end menu
 
-@node Invocation
+@node Invocation, garbage_collect_1, Garbage Collection - Step by Step, Garbage Collection - Step by Step
 @subsection Invocation
 @cindex garbage collection, invocation
 
@@ -6326,7 +6150,7 @@
 for example the ones raised by every @code{QUIT}-macro triggered after
 pressing Ctrl-g.
 
-@node garbage_collect_1
+@node garbage_collect_1, mark_object, Invocation, Garbage Collection - Step by Step
 @subsection @code{garbage_collect_1}
 @cindex @code{garbage_collect_1}
 
@@ -6516,7 +6340,7 @@
 and exit.
 @end enumerate
 
-@node mark_object
+@node mark_object, gc_sweep, garbage_collect_1, Garbage Collection - Step by Step
 @subsection @code{mark_object}
 @cindex @code{mark_object}
 
@@ -6550,7 +6374,7 @@
 In case another object was returned, as mentioned before, we reiterate
 the whole @code{mark_object} process beginning with this next object.
 
-@node gc_sweep
+@node gc_sweep, sweep_lcrecords_1, mark_object, Garbage Collection - Step by Step
 @subsection @code{gc_sweep}
 @cindex @code{gc_sweep}
 
@@ -6645,7 +6469,7 @@
 @code{xfree}) and the free list state is set to the state it had before
 handling this block.
 
-@node sweep_lcrecords_1
+@node sweep_lcrecords_1, compact_string_chars, gc_sweep, Garbage Collection - Step by Step
 @subsection @code{sweep_lcrecords_1}
 @cindex @code{sweep_lcrecords_1}
 
@@ -6666,7 +6490,7 @@
 @code{xfree}. During this loop, the lcrecord statistics are kept up to
 date by calling @code{tick_lcrecord_stats} with the right arguments.
 
-@node compact_string_chars
+@node compact_string_chars, sweep_strings, sweep_lcrecords_1, Garbage Collection - Step by Step
 @subsection @code{compact_string_chars}
 @cindex @code{compact_string_chars}
 
@@ -6712,7 +6536,7 @@
 i.e. @code{to_block}, and all remaining blocks (we know that they just
 carry garbage) are explicitly @code{xfree}d.
 
-@node sweep_strings
+@node sweep_strings, sweep_bit_vectors_1, compact_string_chars, Garbage Collection - Step by Step
 @subsection @code{sweep_strings}
 @cindex @code{sweep_strings}
 
@@ -6733,7 +6557,7 @@
 therefore it was @code{malloc}ed separately, we know also @code{xfree}
 it explicitly.
 
-@node sweep_bit_vectors_1
+@node sweep_bit_vectors_1,  , sweep_strings, Garbage Collection - Step by Step
 @subsection @code{sweep_bit_vectors_1}
 @cindex @code{sweep_bit_vectors_1}
 
@@ -6747,7 +6571,7 @@
 In addition, the bookkeeping information used for the garbage
 collector's output purposes is updated.
 
-@node Integers and Characters
+@node Integers and Characters, Allocation from Frob Blocks, Garbage Collection - Step by Step, Allocation of Objects in XEmacs Lisp
 @section Integers and Characters
 @cindex integers and characters
 @cindex characters, integers and
@@ -6763,7 +6587,7 @@
 are too big; i.e. you won't get the value you expected but the tag bits
 will at least be correct.
 
-@node Allocation from Frob Blocks
+@node Allocation from Frob Blocks, lrecords, Integers and Characters, Allocation of Objects in XEmacs Lisp
 @section Allocation from Frob Blocks
 @cindex allocation from frob blocks
 @cindex frob blocks, allocation from
@@ -6792,7 +6616,7 @@
 none. (There are actually two versions of these macros, one of which is
 more defensive but less efficient and is used for error-checking.)
 
-@node lrecords
+@node lrecords, Low-level allocation, Allocation from Frob Blocks, Allocation of Objects in XEmacs Lisp
 @section lrecords
 @cindex lrecords
 
@@ -7032,7 +6856,7 @@
 For an example, see the methods for window configurations and opaques.
 @end enumerate
 
-@node Low-level allocation
+@node Low-level allocation, Cons, lrecords, Allocation of Objects in XEmacs Lisp
 @section Low-level allocation
 @cindex low-level allocation
 @cindex allocation, low-level
@@ -7105,7 +6929,7 @@
 statistics on how much memory is allocated, so that garbage-collection
 can be invoked when the threshold is reached.
 
-@node Cons
+@node Cons, Vector, Low-level allocation, Allocation of Objects in XEmacs Lisp
 @section Cons
 @cindex cons
 
@@ -7120,7 +6944,7 @@
 If you mess this up, you will get BADLY BURNED, and it has happened
 before.
 
-@node Vector
+@node Vector, Bit Vector, Cons, Allocation of Objects in XEmacs Lisp
 @section Vector
 @cindex vector
 
@@ -7132,7 +6956,7 @@
 is actually @code{malloc()}ed with the right size, however, and access
 to any element through the @code{contents} array works fine.
 
-@node Bit Vector
+@node Bit Vector, Symbol, Vector, Allocation of Objects in XEmacs Lisp
 @section Bit Vector
 @cindex bit vector
 @cindex vector, bit
@@ -7144,7 +6968,7 @@
 tag field in bit vector Lisp words is ``lrecord'' rather than
 ``vector''.)
 
-@node Symbol
+@node Symbol, Marker, Bit Vector, Allocation of Objects in XEmacs Lisp
 @section Symbol
 @cindex symbol
 
@@ -7154,7 +6978,7 @@
 Remember that @code{intern} looks up a symbol in an obarray, creating
 one if necessary.
 
-@node Marker
+@node Marker, String, Symbol, Allocation of Objects in XEmacs Lisp
 @section Marker
 @cindex marker
 
@@ -7166,7 +6990,7 @@
 markers from a buffer.) Markers are removed from a buffer in
 the finalize stage, in @code{ADDITIONAL_FREE_marker()}.
 
-@node String
+@node String, Compiled Function, Marker, Allocation of Objects in XEmacs Lisp
 @section String
 @cindex string
 
@@ -7228,7 +7052,7 @@
 The string compactor recognizes this special 0xFFFFFFFF marker and
 handles it correctly.
 
-@node Compiled Function
+@node Compiled Function,  , String, Allocation of Objects in XEmacs Lisp
 @section Compiled Function
 @cindex compiled function
 @cindex function, compiled
@@ -7240,8 +7064,18 @@
 @chapter Dumping
 @cindex dumping
 
-@section What is dumping and its justification
-@cindex dumping and its justification, what is
+@menu
+* Dumping Justification::       
+* Overview::                    
+* Data descriptions::           
+* Dumping phase::               
+* Reloading phase::             
+* Remaining issues::            
+@end menu
+
+@node Dumping Justification, Overview, Dumping, Dumping
+@section Dumping Justification
+@cindex dumping, justification
 
 The C code of XEmacs is just a Lisp engine with a lot of built-in
 primitives useful for writing an editor.  The editor itself is written
@@ -7263,9 +7097,9 @@
 system-specific process, quite error-prone, and which interferes with a
 lot of system libraries (like malloc).  It is even getting worse
 nowadays with libraries using constructors which are automatically
-called when the program is started (even before main()) which tend to
+called when the program is started (even before @code{main()}) which tend to
 crash when they are called multiple times, once before dumping and once
-after (IRIX 6.x libz.so pulls in some C++ image libraries thru
+after (IRIX 6.x @file{libz.so} pulls in some C++ image libraries thru
 dependencies which have this problem).  Writing the dumper is also one
 of the most difficult parts of porting XEmacs to a new operating system.
 Basically, `dumping' is an operation that is just not officially
@@ -7276,15 +7110,7 @@
 a small number of files, the fully initialized lisp part of the editor,
 without any system-specific hacks.
 
-@menu
-* Overview::
-* Data descriptions::
-* Dumping phase::
-* Reloading phase::
-* Remaining issues::
-@end menu
-
-@node Overview
+@node Overview, Data descriptions, Dumping Justification, Dumping
 @section Overview
 @cindex dumping overview
 
@@ -7294,7 +7120,7 @@
 @item
 At dump time, write all initialized, non-quickly-rebuildable data to a
 file [Note: currently named @file{xemacs.dmp}, but the name will
-change], along with all informations needed for the reloading.
+change], along with all information needed for the reloading.
 
 @item
 When starting xemacs, reload the dump file, relocate it to its new
@@ -7305,24 +7131,25 @@
 Note: As of 21.5.18, the dump file has been moved inside of the
 executable, although there are still problems with this on some systems.
 
-@node Data descriptions
+@node Data descriptions, Dumping phase, Overview, Dumping
 @section Data descriptions
 @cindex dumping data descriptions
 
-The more complex task of the dumper is to be able to write lisp objects
-(lrecords) and C structs to disk and reload them at a different address,
+The more complex task of the dumper is to be able to write memory blocks
+on the heap (lisp objects, i.e. lrecords, and C-allocated memory, such
+as structs and arrays) to disk and reload them at a different address,
 updating all the pointers they include in the process.  This is done by
 using external data descriptions that give information about the layout
-of the structures in memory.
+of the blocks in memory.
 
 The specification of these descriptions is in lrecord.h.  A description
-of an lrecord is an array of struct lrecord_description.  Each of these
-structs include a type, an offset in the structure and some optional
+of an lrecord is an array of struct memory_description.  Each of these
+structs includes a type, an offset in the block, and some optional
 parameters depending on the type.  For instance, here is the string
 description:
 
 @example
-static const struct lrecord_description string_description[] = @{
+static const struct memory_description string_description[] = @{
   @{ XD_BYTECOUNT,         offsetof (Lisp_String, size) @},
   @{ XD_OPAQUE_DATA_PTR,   offsetof (Lisp_String, data), XD_INDIRECT(0, 1) @},
   @{ XD_LISP_OBJECT,       offsetof (Lisp_String, plist) @},
@@ -7339,52 +7166,53 @@
 structure".  @code{XD_END} then ends the description.
 
 This gives us all the information we need to move around what is pointed
-to by a structure (C or lrecord) and, by transitivity, everything that
-it points to.  The only missing information for dumping is the size of
-the structure.  For lrecords, this is part of the
-lrecord_implementation, so we don't need to duplicate it.  For C
-structures we use a struct struct_description, which includes a size
-field and a pointer to an associated array of lrecord_description.
-
-@node Dumping phase
+to by a memory block (C or lrecord) and, by transitivity, everything
+that it points to.  The only missing information for dumping is the size
+of the block.  For lrecords, this is part of the
+lrecord_implementation, so we don't need to duplicate it.  For C blocks
+we use a struct sized_memory_description, which includes a size field
+and a pointer to an associated array of memory_description.
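+
+As a concrete (but purely illustrative) sketch, here is how a
+hypothetical C block containing a Lisp object might be described.  The
+@code{my_block} names are invented for this example, and the field
+layout of @code{struct sized_memory_description} is inferred from the
+description above rather than copied from @file{lrecord.h}:
+
+@example
+/* Hypothetical heap block that the dumper must relocate. */
+struct my_block
+@{
+  Lisp_Object plist;
+@};
+
+/* Layout description, using the XD_ codes shown above. */
+static const struct memory_description my_block_description[] = @{
+  @{ XD_LISP_OBJECT, offsetof (struct my_block, plist) @},
+  @{ XD_END @}
+@};
+
+/* For a C block the dumper also needs the size, so the description
+   is paired with a sizeof in a sized_memory_description. */
+static const struct sized_memory_description sized_my_block_description = @{
+  sizeof (struct my_block),
+  my_block_description
+@};
+@end example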
+
+@node Dumping phase, Reloading phase, Data descriptions, Dumping
 @section Dumping phase
 @cindex dumping phase
 
-Dumping is done by calling the function pdump() (in dumper.c) which is
-invoked from Fdump_emacs (in emacs.c).  This function performs a number
+Dumping is done by calling the function @code{pdump()} (in @file{dumper.c}) which is
+invoked from @code{Fdump_emacs} (in @file{emacs.c}).  This function performs a number
 of tasks.
 
 @menu
-* Object inventory::
-* Address allocation::
-* The header::
-* Data dumping::
-* Pointers dumping::
+* Object inventory::            
+* Address allocation::          
+* The header::                  
+* Data dumping::                
+* Pointers dumping::            
 @end menu
 
-@node Object inventory
+@node Object inventory, Address allocation, Dumping phase, Dumping phase
 @subsection Object inventory
 @cindex dumping object inventory
+@cindex memory blocks
 
 The first task is to build the list of the objects to dump.  This
 includes:
 
 @itemize @bullet
 @item lisp objects
-@item C structures
-@end itemize
-
-We end up with one @code{pdump_entry_list_elmt} per object group (arrays
+@item other memory blocks (C structures, arrays, etc.)
+@end itemize
+
+We end up with one @code{pdump_block_list_elt} per object group (arrays
 of C structs are kept together) which includes a pointer to the first
 object of the group, the per-object size and the count of objects in the
 group, along with some other information which is initialized later.
 
-These entries are linked together in @code{pdump_entry_list} structures
+These entries are linked together in @code{pdump_block_list} structures
 and can be enumerated thru either:
 
 @enumerate
 @item
-the @code{pdump_object_table}, an array of @code{pdump_entry_list}, one
+the @code{pdump_object_table}, an array of @code{pdump_block_list}, one
 per lrecord type, indexed by type number.
 
 @item
@@ -7392,9 +7220,9 @@
 not include pointers, and hence does not need descriptions.
 
 @item
-the @code{pdump_struct_table}, which is a vector of
-@code{struct_description}/@code{pdump_entry_list} pairs, used for
-non-opaque C structures.
+the @code{pdump_desc_table}, which is a vector of
+@code{memory_description}/@code{pdump_block_list} pairs, used for
+non-opaque C memory blocks.
 @end enumerate
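+
+Putting the prose above into a rough sketch (the field names and types
+here are assumptions drawn from that description, not the actual
+declarations in @file{dumper.c}), such a group element might look
+something like this:
+
+@example
+typedef struct pdump_block_list_elt
+@{
+  struct pdump_block_list_elt *next; /* chained into a pdump_block_list */
+  const void *obj;            /* pointer to the first object of the group */
+  Bytecount size;             /* per-object size */
+  int count;                  /* number of objects in the group */
+  /* ... other bookkeeping, initialized later in the dumping process ... */
+@} pdump_block_list_elt;
+@end example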
 
 This uses a marking strategy similar to the garbage collector.  Some
@@ -7402,24 +7230,25 @@
 
 @enumerate
 @item
-We do not use the mark bit (which does not exist for C structures
+We do not use the mark bit (which does not exist for generic memory blocks
 anyway); we use a big hash table instead.
 
 @item
 We do not use the mark function of lrecords but instead rely on the
 external descriptions.  This happens essentially because we need to
-follow pointers to C structures and opaque data in addition to
+follow pointers to generic memory blocks and opaque data in addition to
 Lisp_Object members.
 @end enumerate
 
-This is done by @code{pdump_register_object()}, which handles Lisp_Object
-variables, and @code{pdump_register_struct()} which handles C structures,
-which both delegate the description management to @code{pdump_register_sub()}.
-
-The hash table doubles as a map object to pdump_entry_list_elmt (i.e.
-allows us to look up a pdump_entry_list_elmt with the object it points
-to).  Entries are added with @code{pdump_add_entry()} and looked up with
-@code{pdump_get_entry()}.  There is no need for entry removal.  The hash
+This is done by @code{pdump_register_object()}, which handles
+Lisp_Object variables, and @code{pdump_register_block()}, which handles
+generic memory blocks (C structures, arrays, etc.); both delegate the
+description management to @code{pdump_register_sub()}.
+
+The hash table doubles as a map from objects to their
+@code{pdump_block_list_elmt} (i.e. it allows us to look up the
+@code{pdump_block_list_elmt} associated with a given object).  Entries
+are added with @code{pdump_add_block()} and looked up with
+@code{pdump_get_block()}.  There is no need for entry removal.  The hash
 value is computed quite simply from the object pointer by
 @code{pdump_make_hash()}.
 
@@ -7427,17 +7256,35 @@
 
 @enumerate
 @item
-the @code{staticpro}'ed variables (there is a special @code{staticpro_nodump()}
-call for protected variables we do not want to dump).
-
-@item
-the variables registered via @code{dump_add_root_object}
+the @code{staticpro}'ed variables (there is a special
+@code{staticpro_nodump()} call for protected variables we do not want to
+dump).
+
+@item
+the Lisp_Object variables registered via @code{dump_add_root_lisp_object}
 (@code{staticpro()} is equivalent to @code{staticpro_nodump()} +
-@code{dump_add_root_object()}).
-
-@item
-the variables registered via @code{dump_add_root_struct_ptr}, each of
-which points to a C structure.
+@code{dump_add_root_lisp_object()}).
+
+@item
+the data-segment memory blocks registered via @code{dump_add_root_block}
+(for blocks with relocatable pointers), or @code{dump_add_opaque} (for
+"opaque" blocks with no relocatable pointers; this is just a shortcut
+for calling @code{dump_add_root_block} with a NULL description).
+
+@item
+the pointer variables registered via @code{dump_add_root_block_ptr},
+each of which points to a block of heap memory (generally a C structure
+or array).  Note that @code{dump_add_root_block_ptr} is not technically
+necessary, as a pointer variable can be seen as a special case of a
+data-segment memory block and registered using
+@code{dump_add_root_block}.  Doing it this way, however, would require
+another level of static structures to be declared.  Since pointer
+variables are quite common, @code{dump_add_root_block_ptr} is provided
+for convenience.  Note also that internally we have to treat it
+separately from @code{dump_add_root_block} rather than writing the
+former as a call to the latter, since we don't have support for creating
+and using memory descriptions on the fly -- they must all be statically
+declared in the data segment.  (See the sketch following this list.)
 @end enumerate
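+
+Here is the sketch promised in the last item above.  The variable and
+function names are invented, the description pair (@code{my_block} and
+@code{sized_my_block_description}) is the hypothetical one sketched in
+the Data descriptions section, and the exact signature of
+@code{dump_add_root_block_ptr} is an assumption based on the description
+above rather than a copy of the real prototype:
+
+@example
+/* Static pointer variable rooting a heap-allocated my_block. */
+static struct my_block *my_block_ptr;
+
+void
+vars_of_my_module (void)   /* hypothetical initialization function */
+@{
+  my_block_ptr = xmalloc (sizeof (struct my_block));
+  my_block_ptr->plist = Qnil;
+
+  /* Register the pointer variable so that pdump dumps the block it
+     points to and relocates the pointer at reload time. */
+  dump_add_root_block_ptr (&my_block_ptr, &sized_my_block_description);
+@}
+@end example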
 
 This does not include the GCPRO'ed variables, the specbinds, the
@@ -7449,7 +7296,7 @@
 non-weak equivalent (without changing their type, of course).  This has
 not yet been a problem.
 
-@node Address allocation
+@node Address allocation, The header, Object inventory, Dumping phase
 @subsection Address allocation
 @cindex dumping address allocation
 
@@ -7478,7 +7325,7 @@
 Hence, for each lrecord type, C struct type or opaque data block the
 alignment requirement is computed as a power of two, with a minimum of
 2^2 for lrecords.  @code{pdump_scan_by_alignment()} then scans all the
-@code{pdump_entry_list_elmt}'s, the ones with the highest requirements
+@code{pdump_block_list_elmt}'s, the ones with the highest requirements
 first.  This ensures the best packing.
 
 The maximum alignment requirement we take into account is 2^8.
@@ -7487,7 +7334,7 @@
 starting at offset 256 (this leaves room for the header and keeps the
 alignments happy).
 
-@node The header
+@node The header, Data dumping, Address allocation, Dumping phase
 @subsection The header
 @cindex dumping, the header
 
@@ -7497,7 +7344,7 @@
 post-reload relocation, is set to 0.  It then seeks to offset 256 (base
 offset for the objects).
 
-@node Data dumping
+@node Data dumping, Pointers dumping, The header, Dumping phase
 @subsection Data dumping
 @cindex data dumping
 @cindex dumping, data
@@ -7511,7 +7358,7 @@
 are ensured that the object is always written at the offset in the file
 allocated in step Address Allocation.
 
-@node Pointers dumping
+@node Pointers dumping,  , Data dumping, Dumping phase
 @subsection Pointers dumping
 @cindex pointers dumping
 @cindex dumping, pointers
@@ -7521,7 +7368,7 @@
 
 @enumerate
 @item
-the pdump_root_struct_ptrs dynarr
+the pdump_root_block_ptrs dynarr
 @item
 the pdump_opaques dynarr
 @item
@@ -7546,11 +7393,11 @@
 
 Some very important information like the @code{staticpros} and
 @code{lrecord_implementations_table} are handled indirectly using
-@code{dump_add_opaque} or @code{dump_add_root_struct_ptr}.
+@code{dump_add_opaque} or @code{dump_add_root_block_ptr}.
 
 This is the end of the dumping part.
 
-@node Reloading phase
+@node Reloading phase, Remaining issues, Dumping phase, Dumping
 @section Reloading phase
 @cindex reloading phase
 @cindex dumping, reloading phase
@@ -7574,10 +7421,10 @@
 The memory contents are restored in the obvious and trivial way.
 
 
-@subsection Putting back the pdump_root_struct_ptrs
-@cindex dumping, putting back the pdump_root_struct_ptrs
-
-The variables pointed to by pdump_root_struct_ptrs in the dump phase are
+@subsection Putting back the pdump_root_block_ptrs
+@cindex dumping, putting back the pdump_root_block_ptrs
+
+The variables pointed to by pdump_root_block_ptrs in the dump phase are
 reset to the right relocated object addresses.
 
 
@@ -7592,7 +7439,7 @@
 @subsection Putting back the pdump_root_objects and pdump_weak_object_chains
 @cindex dumping, putting back the pdump_root_objects and pdump_weak_object_chains
 
-Same as Putting back the pdump_root_struct_ptrs.
+Same as Putting back the pdump_root_block_ptrs.
 
 
 @subsection Reorganize the hash tables
@@ -7602,7 +7449,7 @@
 address-dependent, their layout is now wrong.  So we go through each of
 them and have them resorted by calling @code{pdump_reorganize_hash_table}.
 
-@node Remaining issues
+@node Remaining issues,  , Reloading phase, Dumping
 @section Remaining issues
 @cindex dumping, remaining issues
 
@@ -7624,23 +7471,27 @@
 The DOC file contents should probably end up in the dump file.
 
 
-@node Events and the Event Loop, Evaluation; Stack Frames; Bindings, Dumping, Top
+@node Events and the Event Loop, Asynchronous Events; Quit Checking, Dumping, Top
 @chapter Events and the Event Loop
 @cindex events and the event loop
 @cindex event loop, events and the
 
 @menu
-* Introduction to Events::
-* Main Loop::
-* Specifics of the Event Gathering Mechanism::
-* Specifics About the Emacs Event::
-* The Event Stream Callback Routines::
-* Other Event Loop Functions::
-* Converting Events::
-* Dispatching Events; The Command Builder::
+* Introduction to Events::      
+* Main Loop::                   
+* Specifics of the Event Gathering Mechanism::  
+* Specifics About the Emacs Event::  
+* Event Queues::                
+* Event Stream Callback Routines::  
+* Other Event Loop Functions::  
+* Stream Pairs::                
+* Converting Events::           
+* Dispatching Events; The Command Builder::  
+* Focus Handling::              
+* Editor-Level Control Flow Modules::  
 @end menu
 
-@node Introduction to Events
+@node Introduction to Events, Main Loop, Events and the Event Loop, Events and the Event Loop
 @section Introduction to Events
 @cindex events, introduction to
 
@@ -7680,7 +7531,7 @@
   Emacs events are documented in @file{events.h}; I'll discuss them
 later.
 
-@node Main Loop
+@node Main Loop, Specifics of the Event Gathering Mechanism, Introduction to Events, Events and the Event Loop
 @section Main Loop
 @cindex main loop
 @cindex events, main loop
@@ -7749,7 +7600,7 @@
 invoking @code{top_level_1()}, just like when it invokes
 @code{command_loop_2()}.
 
-@node Specifics of the Event Gathering Mechanism
+@node Specifics of the Event Gathering Mechanism, Specifics About the Emacs Event, Main Loop, Events and the Event Loop
 @section Specifics of the Event Gathering Mechanism
 @cindex event gathering mechanism, specifics of the
 
@@ -7780,7 +7631,7 @@
       ------>-----------<----------------<----------------
                   |
                   |
-                  | [collected using select() in emacs_tty_next_event()
+                  | [collected using @code{select()} in @code{emacs_tty_next_event()}
                   |  and converted to the appropriate Emacs event]
                   |
                   |
@@ -7790,15 +7641,15 @@
                   |
                   |
 was there     if not, call
-a SIGINT?  emacs_tty_next_event()
+a SIGINT?  @code{emacs_tty_next_event()}
     |             |
     |             |
     |             |
     V             V
     --->------<----
            |
-           |     [collected in event_stream_next_event();
-           |      SIGINT is converted using maybe_read_quit_event()]
+           |     [collected in @code{event_stream_next_event()};
+           |      SIGINT is converted using @code{maybe_read_quit_event()}]
            V
          Emacs
          event
@@ -7810,7 +7661,7 @@
      command event queue                                    |
                                                if not from command
   (contains events that were                   event queue, call
-  read earlier but not processed,              event_stream_next_event()
+  read earlier but not processed,              @code{event_stream_next_event()}
   typically when waiting in a                               |
   sit-for, sleep-for, etc. for                              |
  a particular event to be received)                         |
@@ -7820,27 +7671,27 @@
                ---->------------------------------------<----
                                                |
                                                | [collected in
-                                               |  next_event_internal()]
+                                               |  @code{next_event_internal()}]
                                                |
  unread-     unread-       event from          |
  command-    command-       keyboard       else, call
- events      event           macro      next_event_internal()
+ events      event           macro      @code{next_event_internal()}
    |           |               |               |
    |           |               |               |
    |           |               |               |
    V           V               V               V
    --------->----------------------<------------
                      |
-                     |      [collected in `next-event', which may loop
+                     |      [collected in @code{next-event}, which may loop
                      |       more than once if the event it gets is on
                      |       a dead frame, device, etc.]
                      |
                      |
                      V
             feed into top-level event loop,
-            which repeatedly calls `next-event'
+            which repeatedly calls @code{next-event}
             and then dispatches the event
-            using `dispatch-event'
+            using @code{dispatch-event}
 @end example
 
 Notice the separation between TTY-specific and generic event mechanism.
@@ -7889,10 +7740,10 @@
   V       V       V       V       V        V       V          V
   --->----------------------------------------<---------<------
        |              |               |
-       |              |               |[collected using select() in
-       |              |               | _XtWaitForSomething(), called
-       |              |               | from XtAppProcessEvent(), called
-       |              |               | in emacs_Xt_next_event();
+       |              |               |[collected using @code{select()} in
+       |              |               | @code{_XtWaitForSomething()}, called
+       |              |               | from @code{XtAppProcessEvent()}, called
+       |              |               | in @code{emacs_Xt_next_event()};
        |              |               | dispatched to various callbacks]
        |              |               |
        |              |               |
@@ -7916,7 +7767,7 @@
        -->----------<--               |
               |                       |
               |                       |
-           dispatch             Xt_what_callback()
+           dispatch             @code{Xt_what_callback()}
            event                  sets flags
            queue                      |
               |                       |
@@ -7927,7 +7778,7 @@
                    |
                    |
                    |     [collected and converted as appropriate in
-                   |            emacs_Xt_next_event()]
+                   |            @code{emacs_Xt_next_event()}]
                    |
                    |
                    V          (above this line is Xt-specific)
@@ -7936,15 +7787,15 @@
                    |
                    |
 was there      if not, call
-a SIGINT?   emacs_Xt_next_event()
+a SIGINT?   @code{emacs_Xt_next_event()}
     |              |
     |              |
     |              |
     V              V
     --->-------<----
            |
-           |        [collected in event_stream_next_event();
-           |         SIGINT is converted using maybe_read_quit_event()]
+           |        [collected in @code{event_stream_next_event()};
+           |         SIGINT is converted using @code{maybe_read_quit_event()}]
            V
          Emacs
          event
@@ -7956,7 +7807,7 @@
      command event queue                                    |
                                               if not from command
   (contains events that were                  event queue, call
-  read earlier but not processed,             event_stream_next_event()
+  read earlier but not processed,             @code{event_stream_next_event()}
   typically when waiting in a                               |
   sit-for, sleep-for, etc. for                              |
  a particular event to be received)                         |
@@ -7966,39 +7817,205 @@
                ---->----------------------------------<------
                                                |
                                                | [collected in
-                                               |  next_event_internal()]
+                                               |  @code{next_event_internal()}]
                                                |
  unread-     unread-       event from          |
  command-    command-       keyboard       else, call
- events      event           macro      next_event_internal()
+ events      event           macro      @code{next_event_internal()}
    |           |               |               |
    |           |               |               |
    |           |               |               |
    V           V               V               V
    --------->----------------------<------------
                      |
-                     |      [collected in `next-event', which may loop
+                     |      [collected in @code{next-event}, which may loop
                      |       more than once if the event it gets is on
                      |       a dead frame, device, etc.]
                      |
                      |
                      V
             feed into top-level event loop,
-            which repeatedly calls `next-event'
+            which repeatedly calls @code{next-event}
             and then dispatches the event
-            using `dispatch-event'
-@end example
-
-@node Specifics About the Emacs Event
+            using @code{dispatch-event}
+@end example
+
+@node Specifics About the Emacs Event, Event Queues, Specifics of the Event Gathering Mechanism, Events and the Event Loop
 @section Specifics About the Emacs Event
 @cindex event, specifics about the Lisp object
 
-@node The Event Stream Callback Routines
-@section The Event Stream Callback Routines
-@cindex event stream callback routines, the
-@cindex callback routines, the event stream
-
-@node Other Event Loop Functions
+@node Event Queues, Event Stream Callback Routines, Specifics About the Emacs Event, Events and the Event Loop
+@section Event Queues
+@cindex event queues
+@cindex queues, event
+
+There are two event queues here -- the command event queue (#### which
+should be called "deferred event queue" and is in my glyph ws) and the
+dispatch event queue. (MS Windows actually has an extra dispatch queue
+for non-user events and uses the generic one only for user events.  This
+is because user and non-user events in Windows come through the same
+place -- the window procedure -- but under X, it's possible to
+selectively process events such that we take all the user events before
+the non-user ones. #### In fact, given the way we now drain the queue,
+we might need two separate queues, like under Windows.  Need to think
+carefully exactly how this works, and should certainly generalize the
+two different queues.)
+
+The dispatch queue (which used to be duplicated inside of each event
+implementation) is used for events that have been read from the
+window-system event queue(s) but not yet processed by
+@code{next_event_internal()}.  It exists for two reasons: (1) in many
+implementations, events come from the window system by way of
+callbacks, which need to push the event to be returned onto a queue;
+(2) in order to handle QUIT in a guaranteed-correct fashion without
+resorting to weird implementation-specific hacks that may or may not
+work well, we need to drain the window-system event queues and then look
+through them to see if there's an event matching quit-char (usually ^G).  The
+drained events need to go onto a queue. (There are other, similar cases
+where we need to drain the pending events so we can look ahead -- for
+example, checking for pending expose events under X to avoid excessive
+server activity.)
+
+The command event queue is used @strong{AFTER} an event has been read from
+@code{next_event_internal()}, when it needs to be pushed back.  This
+includes, for example, @code{accept-process-output}, @code{sleep-for}
+and @code{wait_delaying_user_input()}.  Eval events and the like,
+generated by @code{enqueue-eval-event},
+@code{enqueue_magic_eval_event()}, etc. are also pushed onto this queue.
+Some events generated by callbacks are also pushed onto this queue, ####
+although maybe they shouldn't be.
+
+The command queue takes precedence over the dispatch queue.
+
+#### It is worth investigating to see whether both queues are really
+needed, and how exactly they should be used.  @code{enqueue-eval-event},
+for example, could certainly push onto the dispatch queue, and all
+callbacks maybe should.  @code{wait_delaying_user_input()} seems to need
+both queues, since it can take events from the dispatch queue and push
+them onto the command queue; but it perhaps could be rewritten to avoid
+this.  #### In general we need to review the handling of these two
+queues, figure out exactly what ought to be happening, and document it.
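+
+As a concrete (if simplified) illustration of the precedence rule just
+described, here is a small self-contained toy program -- @strong{not}
+actual XEmacs code, and all names are invented -- in which a
+@code{next_event}-style routine always drains a "command" queue before
+consulting a "dispatch" queue:
+
+@example
+/* Toy model of the two-queue precedence rule; not XEmacs code. */
+#include <stdio.h>
+
+#define QUEUE_SIZE 8
+
+struct toy_queue @{ const char *events[QUEUE_SIZE]; int head, tail; @};
+
+static void enqueue (struct toy_queue *q, const char *ev)
+@{ q->events[q->tail++ % QUEUE_SIZE] = ev; @}
+
+static const char *dequeue (struct toy_queue *q)
+@{ return q->head == q->tail ? NULL : q->events[q->head++ % QUEUE_SIZE]; @}
+
+/* Queue consultation order, as described above: command queue first. */
+static const char *toy_next_event (struct toy_queue *command_q,
+                                   struct toy_queue *dispatch_q)
+@{
+  const char *ev = dequeue (command_q);
+  if (!ev)
+    ev = dequeue (dispatch_q);
+  return ev ? ev : "(would block waiting for a window-system event)";
+@}
+
+int main (void)
+@{
+  struct toy_queue command_q = @{ @{ 0 @}, 0, 0 @};
+  struct toy_queue dispatch_q = @{ @{ 0 @}, 0, 0 @};
+  enqueue (&dispatch_q, "drained window-system event");
+  enqueue (&command_q, "eval event pushed back by sleep-for");
+  /* Prints the command-queue event, even though the dispatch-queue
+     event was enqueued first.  */
+  printf ("%s\n", toy_next_event (&command_q, &dispatch_q));
+  return 0;
+@}
+@end example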
+
+
+@node Event Stream Callback Routines, Other Event Loop Functions, Event Queues, Events and the Event Loop
+@section Event Stream Callback Routines
+@cindex event stream callback routines
+@cindex callback routines, event stream
+
+There is one object called an event_stream.  This object contains
+callback functions for doing the window-system-dependent operations
+that XEmacs requires.
+
+If XEmacs is compiled with support for X11 and the X Toolkit, then this
+event_stream structure will contain functions that can cope with input
+on XEmacs windows on multiple displays, as well as input from dumb tty
+frames.
+
+If it is desired to have XEmacs able to open frames on displays of
+multiple heterogeneous types (X11 and SunView, or X11 and NeXT, for
+example), then it will be necessary to construct an event_stream
+structure that can cope with all the given types.  Currently, the only
+implemented event_streams are for dumb TTYs, for X11 plus dumb TTYs, and
+for MS Windows.
+
+To implement this for one window system is relatively simple.
+To implement this for multiple window systems is trickier and may
+not be possible in all situations, but it's been done for X and TTY.
+
+Note that these callbacks are @strong{NOT} console methods; that's because
+the routines are not specific to a particular console type but must
+be able to simultaneously cope with all allowable console types.
+
+The slots of the event_stream structure:
+
+@table @code
+@item next_event_cb
+A function which fills in an XEmacs_event structure with the next event
+available.  If there is no event available, then this should block.
+
+IMPORTANT: timer events and especially process events @strong{must not} be
+returned if there are events of other types available; otherwise you can
+end up with an infinite loop in @code{Fdiscard_input()}.
+
+@item event_pending_cb
+A function which says whether there are events to be read.  If called
+with an argument of 0, then this should say whether calling the
+@code{next_event_cb} will block.  If called with a non-zero argument,
+then this should say whether there are that many user-generated events
+pending (that is, keypresses, mouse-clicks, dialog-box selection events,
+etc.). (This is used for redisplay optimization, among other things.)
+The difference is that the former includes process events and timer
+events, but the latter doesn't.
+
+If this function is not sure whether there are events to be read, it
+@strong{must} return 0.  Otherwise various undesirable effects will
+occur, such as redisplay not occurring until the next event occurs.
+
+@item handle_magic_event_cb
+XEmacs calls this with an event structure which contains window-system
+dependent information that XEmacs doesn't need to know about, but which
+must happen in order.  If the @code{next_event_cb} never returns an
+event of type "magic", this will never be used.
+
+@item format_magic_event_cb
+Called with a magic event; print a representation of the innards of the
+event to @var{PSTREAM}.
+
+@item compare_magic_event_cb
+Called with two magic events; return non-zero if the innards of the two
+are equal, zero otherwise.
+
+@item hash_magic_event_cb
+Called with a magic event; return a hash of the innards of the event.
+
+@item add_timeout_cb
+Called with an @var{EMACS_TIME}, the absolute time at which a wakeup event
+should be generated; and a void *, which is an arbitrary value that will
+be returned in the timeout event.  The timeouts generated by this
+function should be one-shots: they fire once and then disappear.  This
+callback should return an int id-number which uniquely identifies this
+wakeup.  If an implementation doesn't have microsecond or millisecond
+granularity, it should round up to the closest value it can deal with.
+
+@item remove_timeout_cb
+Called with an int, the id number of a wakeup to discard.  This id
+number must have been returned by the @code{add_timeout_cb}.  If the given
+wakeup has already expired, this should do nothing.
+
+@item select_process_cb
+@item unselect_process_cb
+These callbacks tell the underlying implementation to add or remove a
+file descriptor from the list of fds which are polled for
+inferior-process input.  When input becomes available on the given
+process connection, an event of type "process" should be generated.
+
+@item select_console_cb
+@item unselect_console_cb
+These callbacks tell the underlying implementation to add or remove a
+console from the list of consoles which are polled for user-input.
+
+@item select_device_cb
+@item unselect_device_cb
+These callbacks are used by Unixoid event loops (those that use @code{select()}
+and file descriptors and have a separate input fd per device).
+
+@item create_io_streams_cb
+@item delete_io_streams_cb
+These callbacks are called by process code to create the input and
+output lstreams which are used for subprocess I/O.
+
+@item quitp_cb
+A handler function called from the @code{QUIT} macro which should check
+whether the quit character has been typed.  On systems with SIGIO, this
+will not be called unless the @code{sigio_happened} flag is true (it is set
+from the SIGIO handler).
+@end table
+
+XEmacs has its own event structures, which are distinct from the event
+structures used by X or any other window system.  It is the job of the
+event_stream layer to translate to this format.
+
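+As a rough sketch of what such a callback table looks like in C, the
+following uses the slot names listed above with approximate signatures.
+This is @strong{not} the real declaration, which differs in detail; only
+a representative subset of the slots is shown, and the incomplete struct
+tags merely stand in for the real XEmacs types:
+
+@example
+/* Approximate shape only; see the event-handling source for the
+   real event_stream.  */
+struct Lisp_Event;
+struct Lisp_Process;
+struct console;
+
+struct sketch_event_stream
+@{
+  int  (*event_pending_cb)      (int how_many_user_events);
+  void (*next_event_cb)         (struct Lisp_Event *event);
+  void (*handle_magic_event_cb) (struct Lisp_Event *event);
+  void (*select_process_cb)     (struct Lisp_Process *proc);
+  void (*unselect_process_cb)   (struct Lisp_Process *proc);
+  void (*select_console_cb)     (struct console *con);
+  void (*unselect_console_cb)   (struct console *con);
+  int  (*quitp_cb)              (void);
+@};
+@end example
+
+Each event implementation (@file{event-Xt.c}, @file{event-tty.c},
+@file{event-msw.c}) fills in one such structure with its own routines at
+startup, and the generic code in @file{event-stream.c} calls through
+these slots rather than into the window-system code directly.
+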
+@node Other Event Loop Functions, Stream Pairs, Event Stream Callback Routines, Events and the Event Loop
 @section Other Event Loop Functions
 @cindex event loop functions, other
 
@@ -8021,7 +8038,56 @@
 the right kind of input method support, it is possible for (read-char)
 to return a Kanji character.
 
-@node Converting Events
+@node Stream Pairs, Converting Events, Other Event Loop Functions, Events and the Event Loop
+@section Stream Pairs
+@cindex stream pairs
+@cindex pairs, stream
+
+Since there are many possible process/event loop combinations, the
+event code is responsible for creating an appropriate lstream type; the
+process implementation does not care how that lstream is implemented.
+
+The create-stream-pair function is passed two @code{void *} values,
+which identify process-dependent "handles".  The process implementation
+uses these handles to communicate with child processes.  The function
+must be prepared to receive handle types from any process
+implementation.  Since only one process implementation exists in any
+particular XEmacs configuration, preprocessing is the means by which
+support for the code that deals with the particular handle types is
+compiled in.
+
+For example, a unixoid-type event loop, which relies on file
+descriptors, may be asked to create a pair of streams by a Unix-style
+process implementation.  In this case, the handles passed are Unix file
+descriptors, and the code may deal with these directly.  However, the
+same code may be used on a Win32 system with X Windows.  In this case,
+the Win32 process implementation passes handles of type @code{HANDLE},
+and the @code{create_io_streams} function must call an appropriate
+function to get file descriptors given the @code{HANDLE}s, so that
+these descriptors may be passed to @code{XtAddInput}.
+
+A handle may be given a special "denying" value, in which case the
+corresponding lstream should not be created.
+
+The return value of the function is a unique stream identifier (USID).
+It is used by the platform-independent part of the process
+implementation.  The @code{get_process_from_usid} function returns the
+process object corresponding to a given USID.  The event stream is
+responsible for converting its internal handle type into a USID.
+
+An example is the TTY event stream.  When a file descriptor signals
+input, the event loop must determine the process for which the input is
+destined.  Thus, the implementation uses the process input stream's
+file descriptor as the USID, by simply casting the fd value to the USID
+type.
+
+There are two special USID values.  One, @code{USID_ERROR}, indicates
+that the stream pair cannot be created.  The second,
+@code{USID_DONTHASH}, indicates that the streams are created, but the
+event stream does not wish to be able to find the process by its USID.
+Specifically, if an event stream implementation never calls
+@code{get_process_from_usid}, this value should always be returned, to
+avoid accumulating useless information on the USID-to-process
+relationship.
+
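+A minimal sketch of the TTY-style convention just described follows.
+The function name, parameter list and stand-in constants here are
+invented for illustration; the real callback signature and the real
+@code{USID}, @code{USID_ERROR} and @code{USID_DONTHASH} definitions live
+in the event and process sources:
+
+@example
+/* Hypothetical sketch; not the real create_io_streams callback. */
+typedef long usid_sketch;             /* stands in for USID           */
+#define USID_SKETCH_ERROR    (-1L)    /* stands in for USID_ERROR     */
+#define USID_SKETCH_DONTHASH (-2L)    /* stands in for USID_DONTHASH  */
+
+static usid_sketch
+sketch_tty_create_io_streams (void *inhandle, void *outhandle)
+@{
+  /* For a unixoid process implementation the "handles" are just file
+     descriptors smuggled through void pointers.  */
+  long infd  = (long) inhandle;
+  long outfd = (long) outhandle;
+
+  if (infd < 0 && outfd < 0)
+    return USID_SKETCH_ERROR;         /* no lstreams could be created */
+
+  /* ... wrap infd and outfd in input and output lstreams here ...    */
+
+  /* The input fd doubles as the USID, so that the process can later
+     be looked up with get_process_from_usid().  */
+  return infd >= 0 ? (usid_sketch) infd : USID_SKETCH_DONTHASH;
+@}
+@end example
+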
+@node Converting Events, Dispatching Events; The Command Builder, Stream Pairs, Events and the Event Loop
 @section Converting Events
 @cindex converting events
 @cindex events, converting
@@ -8034,7 +8100,7 @@
 between character representation and the split-up event representation
 (keysym plus mod keys).
 
-@node Dispatching Events; The Command Builder
+@node Dispatching Events; The Command Builder, Focus Handling, Converting Events, Events and the Event Loop
 @section Dispatching Events; The Command Builder
 @cindex dispatching events; the command builder
 @cindex events; the command builder, dispatching
@@ -8042,20 +8108,524 @@
 
 Not yet documented.
 
-@node Evaluation; Stack Frames; Bindings, Symbols and Variables, Events and the Event Loop, Top
+@node Focus Handling, Editor-Level Control Flow Modules, Dispatching Events; The Command Builder, Events and the Event Loop
+@section Focus Handling
+@cindex focus handling
+
+Ben's capsule lecture on focus:
+
+In GNU Emacs @code{select-frame} never changes the window-manager frame
+focus.  All it does is change the "selected frame".  This is similar to
+what happens when we call @code{select-device} or @code{select-console}.
+Whenever an event comes in (including a keyboard event), its frame is
+selected; therefore, evaluating @code{select-frame} in @samp{*scratch*}
+won't cause any effects because the next received event (in the same
+frame) will cause a switch back to the frame displaying
+@samp{*scratch*}.
+
+Whenever a focus-change event is received from the window manager, it
+generates a @code{switch-frame} event, which causes the Lisp function
+@code{handle-switch-frame} to get run.  This basically just runs
+@code{select-frame} (see below, however).
+
+In GNU Emacs, if you want to have an operation run when a frame is
+selected, you supply an event binding for @code{switch-frame} (and then
+maybe call @code{handle-switch-frame}, or something ...).
+
+In XEmacs, we @strong{do} change the window-manager frame focus as a
+result of @code{select-frame}, but not until the next time an event is
+received, so that a function that momentarily changes the selected frame
+won't cause WM focus flashing. (#### There's something not quite right
+here; this is causing the wrong-cursor-focus problems that you
+occasionally see.  But the general idea is correct.) This approach is
+winning for people who use the explicit-focus model, but is trickier to
+implement.
+
+We also don't make the @code{switch-frame} event visible but instead have
+@code{select-frame-hook}, which is a better approach.
+
+There is the problem of surrogate minibuffers, where when we enter the
+minibuffer, you essentially want to temporarily switch the WM focus to
+the frame with the minibuffer, and switch it back when you exit the
+minibuffer.
+
+GNU Emacs solves this with the crockish @code{redirect-frame-focus},
+which says "for keyboard events received from FRAME, act like they're
+coming from FOCUS-FRAME".  I think what this means is that, when a
+keyboard event comes in and the event manager is about to select the
+event's frame, if that frame has its focus redirected, the redirected-to
+frame is selected instead.  That way, if you're in a minibufferless
+frame and enter the minibuffer, then all Lisp functions that run see the
+selected frame as the minibuffer's frame rather than the minibufferless
+frame you came from, so that (e.g.) your typing actually appears in the
+minibuffer's frame and things behave sanely.
+
+There's also some weird logic that switches the redirected frame focus
+from one frame to another if Lisp code explicitly calls
+@code{select-frame} (but not if @code{handle-switch-frame} is called),
+and saves and restores the frame focus in window configurations,
+etc. etc.  All of this logic is heavily @code{#if 0}'d, with lots of
+comments saying "No, this approach doesn't seem to work, so I'm trying
+this ...  is it reasonable?  Well, I'm not sure ..." that are a red flag
+indicating crockishness.
+
+Because of our way of doing things, we can avoid all this crock.
+Keyboard events never cause a select-frame (who cares what frame they're
+associated with?  They come from a console, only).  We change the actual
+WM focus to a surrogate minibuffer frame, so we don't have to do any
+internal redirection.  In order to get the focus back, I took the
+approach in @file{minibuf.el} of just checking to see if the frame we moved to
+is still the selected frame, and moving back to the old one if so.
+Conceivably we might have to do the weird "tracking" that GNU Emacs does
+when @code{select-frame} is called, but I don't think so.  If the
+selected frame moved from the minibuffer frame, then we just leave it
+there, figuring that someone knows what they're doing.  Because we don't
+have any redirection recorded anywhere, it's safe to do this, and we
+don't end up with unwanted redirection.
+
+@node Editor-Level Control Flow Modules,  , Focus Handling, Events and the Event Loop
+@section Editor-Level Control Flow Modules
+@cindex control flow modules, editor-level
+@cindex modules, editor-level control flow
+
+@example
+@file{event-Xt.c}
+@file{event-msw.c}
+@file{event-stream.c}
+@file{event-tty.c}
+@file{events-mod.h}
+@file{gpmevent.c}
+@file{gpmevent.h}
+@file{events.c}
+@file{events.h}
+@end example
+
+These implement the handling of events (user input and other system
+notifications).
+
+@file{events.c} and @file{events.h} define the @dfn{event} Lisp object
+type and primitives for manipulating it.
+
+@file{event-stream.c} implements the basic functions for working with
+event queues, dispatching an event by looking it up in relevant keymaps
+and such, and handling timeouts; this includes the primitives
+@code{next-event} and @code{dispatch-event}, as well as related
+primitives such as @code{sit-for}, @code{sleep-for}, and
+@code{accept-process-output}. (@file{event-stream.c} is one of the
+hairiest and trickiest modules in XEmacs.  Beware!  You can easily mess
+things up here.)
+
+@file{event-Xt.c} and @file{event-tty.c} implement the low-level
+interfaces onto retrieving events from Xt (the X toolkit) and from TTY's
+(using @code{read()} and @code{select()}), respectively.  The event
+interface enforces a clean separation between the specific code for
+interfacing with the operating system and the generic code for working
+with events, by defining an API of basic, low-level event methods;
+@file{event-Xt.c} and @file{event-tty.c} are two different
+implementations of this API.  To add support for a new operating system
+(e.g. NeXTstep), one merely needs to provide another implementation of
+those API functions.
+
+Note that the choice of whether to use @file{event-Xt.c} or
+@file{event-tty.c} is made at compile time!  Or at the very latest, it
+is made at startup time.  @file{event-Xt.c} handles events for
+@emph{both} X and TTY frames; @file{event-tty.c} is only used when X
+support is not compiled into XEmacs.  The reason for this is that there
+is only one event loop in XEmacs: thus, it needs to be able to receive
+events from all different kinds of frames.
+
+
+
+@example
+@file{keymap.c}
+@file{keymap.h}
+@end example
+
+@file{keymap.c} and @file{keymap.h} define the @dfn{keymap} Lisp object
+type and associated methods and primitives. (Remember that keymaps are
+objects that associate event descriptions with functions to be called to
+``execute'' those events; @code{dispatch-event} looks up events in the
+relevant keymaps.)
+
+
+
+@example
+@file{cmdloop.c}
+@end example
+
+@file{cmdloop.c} contains functions that implement the actual editor
+command loop---i.e. the event loop that cyclically retrieves and
+dispatches events.  This code is also rather tricky, just like
+@file{event-stream.c}.
+
+
+
+@example
+@file{macros.c}
+@file{macros.h}
+@end example
+
+These two modules contain the basic code for defining keyboard macros.
+These functions don't actually do much; most of the code that handles keyboard
+macros is mixed in with the event-handling code in @file{event-stream.c}.
+
+
+
+@example
+@file{minibuf.c}
+@end example
+
+This contains some miscellaneous code related to the minibuffer (most of
+the minibuffer code was moved into Lisp by Richard Mlynarik).  This
+includes the primitives for completion (although filename completion is
+in @file{dired.c}), the lowest-level interface to the minibuffer (if the
+command loop were cleaned up, this too could be in Lisp), and code for
+dealing with the echo area (this, too, was mostly moved into Lisp, and
+the only code remaining is code to call out to Lisp or provide simple
+bootstrapping implementations early in temacs, before the echo-area Lisp
+code is loaded).
+
+
+@node Asynchronous Events; Quit Checking, Evaluation; Stack Frames; Bindings, Events and the Event Loop, Top
+@chapter Asynchronous Events; Quit Checking
+@cindex asynchronous events; quit checking
+@cindex asynchronous events
+
+@menu
+* Signal Handling::             
+* Control-G (Quit) Checking::   
+* Profiling::                   
+* Asynchronous Timeouts::       
+* Exiting::                     
+@end menu
+
+@node Signal Handling, Control-G (Quit) Checking, Asynchronous Events; Quit Checking, Asynchronous Events; Quit Checking
+@section Signal Handling
+@cindex signal handling
+
+@node Control-G (Quit) Checking, Profiling, Signal Handling, Asynchronous Events; Quit Checking
+@section Control-G (Quit) Checking
+@cindex Control-g checking
+@cindex C-g checking
+@cindex quit checking
+@cindex QUIT checking
+@cindex critical quit
+
+@emph{Note}: The code to handle QUIT is divided between @file{lisp.h}
+and @file{signal.c}.  There is also some special-case code in the async
+timer code in @file{event-stream.c} to notice when the poll-for-quit
+(and poll-for-sigchld) timers have gone off.
+
+Here's an overview of how this convoluted stuff works (a toy model of
+the quit-deferral logic follows the list):
+
+@enumerate
+@item
+
+Scattered throughout the XEmacs core code are calls to the macro
+@code{QUIT}.  This macro checks to see whether a @kbd{C-g} has recently been pressed
+and not yet handled, and if so, it handles the @kbd{C-g} by calling
+@code{signal_quit()}, which invokes the standard @code{Fsignal()} code,
+with the error being @code{Qquit}.  Lisp code can establish handlers
+for this (using @code{condition-case}), but normally there is no
+handler, and so execution is thrown back to the innermost enclosing
+event loop. (One of the things that happens when entering an event loop
+is that a @code{condition-case} is established that catches @strong{all} calls
+to @code{signal}, including this one.)
+
+@item
+How does the @code{QUIT} macro check to see whether @kbd{C-g} has been
+pressed?  Obviously this needs to be extremely fast.  Now for some history.
+In early Lemacs as inherited from the FSF going back 15 years or
+more, there was a great fondness for using SIGIO (which is sent
+whenever there is I/O available on a given socket, tty, etc.).
+In fact, in GNU Emacs, perhaps even today, all reading of events
+from the X server occurs inside the SIGIO handler!  This is crazy,
+but not completely relevant.  What is relevant is that similar
+stuff happened inside the SIGIO handler for @kbd{C-g}: it searched
+through all the pending (i.e. not yet delivered to XEmacs) X events for
+one that matched @kbd{C-g}.  When it saw a match, it set
+@code{Vquit_flag} to @code{Qt}.  On TTY's, @kbd{C-g} is actually mapped
+to be the interrupt character (i.e. it generates SIGINT), and XEmacs's
+handler for this signal sets @code{Vquit_flag} to @code{Qt}.  Then,
+sometime later, after the signal handlers finished and a @code{QUIT}
+macro was called, the macro noticed the setting of @code{Vquit_flag} and
+used this as an indication to call @code{signal_quit()}.  What
+@code{signal_quit()} actually does is set @code{Vquit_flag} to
+@code{Qnil} (so that we won't get repeated interruptions from a single
+@kbd{C-g} press) and then call the equivalent of
+@code{(signal 'quit nil)}.
+
+@item
+Another complication is introduced in that @code{Vquit_flag} is actually
+exported to Lisp as @code{quit-flag}.  This allows users some level of
+control over whether and when @kbd{C-g} is processed as quit, esp. in
+combination with @code{inhibit-quit}.  This is another Lisp variable,
+and if set to non-@code{nil}, it inhibits @code{signal_quit()} from
+getting called, meaning that the @kbd{C-g} gets essentially ignored.
+But not completely: because the resetting of @code{quit-flag} happens
+only in @code{signal_quit()}, which isn't getting called, the @kbd{C-g}
+press is still noticed, and as soon as @code{inhibit-quit} is set back
+to @code{nil}, a quit will be signalled at the next @code{QUIT} macro.
+Thus, what @code{inhibit-quit} really does is defer quits until after
+the quit-inhibited period.
+
+@item
+Another consideration, introduced by XEmacs, is critical quitting.  If
+you press @kbd{Control-Shift-G} instead of just @kbd{C-g},
+@code{quit-flag} is set to @code{critical} instead of to t.  When QUIT
+processes this value, it @strong{ignores} the value of
+@code{inhibit-quit}.  This allows you to quit even out of a
+quit-inhibited section of code!  Furthermore, when @code{signal_quit()}
+notices that it was invoked as a result of a critical quit, it
+automatically invokes the debugger (which otherwise would only happen
+when @code{debug-on-quit} is set to t).
+
+@item
+Well, I explained above about how @code{quit-flag} gets set correctly,
+but I began with a disclaimer stating that this was the old way
+of doing things.  What's done now?  Well, first of all, the SIGIO
+handler (which formerly checked all pending events to see if there's
+a @kbd{C-g}) now does nothing but set a flag -- or actually two flags,
+@code{something_happened} and @code{quit_check_signal_happened}.  There are two
+flags because the QUIT macro is now used for more than just handling
+QUIT; it's also used for running asynchronous timeout handlers that
+have recently expired, and perhaps other things.  The idea here is
+that the QUIT macros occur extremely often in the code, but only occur
+at places that are relatively safe -- in particular, if an error occurs,
+nothing will get completely trashed.
+
+@item
+Now, let's look at QUIT again.  
+
+@item 
+
+UNFINISHED.  Note, however, that as of the point when this comment got
+committed to CVS (mid-2001), the interaction between reading @kbd{C-g}
+as an event and processing it as QUIT was overhauled to (for the first
+time) be understandable and actually work correctly.  Now, the way
+things work is that if @kbd{C-g} is pressed while XEmacs is blocking at
+the top level, waiting for a user event, it will be read as an event;
+otherwise, it will cause QUIT. (This includes times when XEmacs is
+blocking, but not waiting for a user event,
+e.g. @code{accept-process-output} and
+@code{wait_delaying_user_input()}.)  Formerly, this was supposed to
+happen, but didn't always due to a bizarre and broken scheme, documented
+in @code{next_event_internal} like this:
+
+@quotation
+If we read a @kbd{C-g}, then set @code{quit-flag} but do not discard the
+@kbd{C-g}.  The callers of @code{next_event_internal()} will do one of
+two things:
+
+@enumerate
+@item
+set @code{Vquit_flag} to Qnil. (@code{next-event} does this.) This will
+cause the ^G to be treated as a normal keystroke.
+
+@item
+not change @code{Vquit_flag} but attempt to enqueue the ^G, at which
+point it will be discarded.  The next time QUIT is called, it will
+notice that @code{Vquit_flag} was set.
+@end enumerate
+@end quotation
+
+This required weirdness in @code{enqueue_command_event_1} like this:
+
+@quotation
+put the event on the typeahead queue, unless the event is the quit char,
+in which case the @code{QUIT} which will occur on the next trip through this
+loop is all the processing we should do - leaving it on the queue would
+cause the quit to be processed twice.
+@end quotation
+
+And further weirdness elsewhere, none of which made any sense, and
+didn't work, because (e.g.) it required that QUIT never happen anywhere
+inside @code{next_event_internal()} or any callers when @kbd{C-g} should
+be read as a user event, which was impossible to implement in practice.
+
+Now what we do is fairly simple.  Callers of
+@code{next_event_internal()} that want @kbd{C-g} read as a user event
+call @code{begin_dont_check_for_quit()}.  @code{next_event_internal()},
+when it gets a @kbd{C-g}, simply sets @code{Vquit_flag} (just as when a
+@kbd{C-g} is detected during the operation of @code{QUIT} or
+@code{QUITP}), and then tries to @code{QUIT}.  This will fail if blocked
+by the previous call, at which point @code{next_event_internal()} will
+return the @kbd{C-g} as an event.  To unblock things, first set
+@code{Vquit_flag} to nil (it was set to t when the @kbd{C-g} was read,
+and if we don't reset it, the next call to @code{QUIT} will quit), and
+then @code{unbind_to()} the depth returned by
+@code{begin_dont_check_for_quit()}.  It makes no difference if
+@code{QUIT} is called a zillion times in @code{next_event_internal()} or
+anywhere else, because it's blocked and will never signal.
+@end enumerate
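+
+To make the deferral rules above concrete, here is a self-contained toy
+model -- @strong{not} the code in @file{lisp.h} or @file{signal.c}, and
+using plain C variables in place of the real Lisp objects -- of how a
+quit is deferred by @code{inhibit-quit} and how a critical quit
+overrides it:
+
+@example
+/* Toy model of QUIT/signal_quit deferral; not the real XEmacs code. */
+#include <stdio.h>
+
+enum quit_value @{ QUIT_NONE, QUIT_NORMAL, QUIT_CRITICAL @};
+
+static enum quit_value quit_flag;   /* models Vquit_flag    */
+static int inhibit_quit;            /* models Vinhibit_quit */
+
+static void toy_signal_quit (void)
+@{
+  quit_flag = QUIT_NONE;            /* one C-g quits only once */
+  printf ("(signal 'quit nil)\n");  /* a critical quit would also
+                                       enter the debugger */
+@}
+
+/* Models the check performed by the QUIT macro.  */
+static void toy_QUIT (void)
+@{
+  if (quit_flag != QUIT_NONE
+      && (quit_flag == QUIT_CRITICAL || !inhibit_quit))
+    toy_signal_quit ();
+@}
+
+int main (void)
+@{
+  inhibit_quit = 1;
+  quit_flag = QUIT_NORMAL;          /* C-g arrives while inhibited */
+  toy_QUIT ();                      /* nothing happens: deferred   */
+  inhibit_quit = 0;
+  toy_QUIT ();                      /* the deferred quit fires now */
+
+  inhibit_quit = 1;
+  quit_flag = QUIT_CRITICAL;        /* Control-Shift-G             */
+  toy_QUIT ();                      /* ignores inhibit-quit        */
+  return 0;
+@}
+@end example
+
+Running this prints @code{(signal 'quit nil)} twice: once when the
+deferred quit finally fires after @code{inhibit-quit} is cleared, and
+once for the critical quit that bypasses @code{inhibit-quit} entirely.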
+
+@node Profiling, Asynchronous Timeouts, Control-G (Quit) Checking, Asynchronous Events; Quit Checking
+@section Profiling
+@cindex profiling
+@cindex SIGPROF
+
+We implement our own profiling scheme so that we can determine
+things like which Lisp functions are occupying the most time.  Any
+standard OS-provided profiling works on C functions, which is
+not always that useful -- and inconvenient, since it requires compiling
+with profile info and the results can't be retrieved dynamically while
+XEmacs is running.
+
+The basic idea is simple.  We set a profiling timer using
+@code{setitimer (ITIMER_PROF)}, which generates a @code{SIGPROF} every
+so often.  (This runs not
+in real time but rather when the process is executing or the system is
+running on behalf of the process -- at least, that is the case under
+Unix.  Under MS Windows and Cygwin, there is no @code{setitimer()}, so we
+simulate it using multimedia timers, which run in real time.  To make
+the results a bit more realistic, we ignore ticks that go off while
+blocking on an event wait.  Note that Cygwin does provide a simulation
+of @code{setitimer()}, but it's in real time anyway, since Windows doesn't
+provide a way to have process-time timers, and furthermore, it's broken,
+so we don't use it.) When the signal goes off, we see what we're in, and
+add 1 to the count associated with that function.
+
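+As a standalone illustration of the raw OS mechanism involved (not of
+the XEmacs profiler itself; the 10ms interval below is arbitrary), the
+following program arms @code{ITIMER_PROF} and counts @code{SIGPROF}
+ticks while burning CPU time:
+
+@example
+/* Standalone SIGPROF demo; the XEmacs profiler instead bumps a count
+   for the current backtrace frame in a non-Lisp hash table.  */
+#include <signal.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/time.h>
+
+static volatile sig_atomic_t prof_ticks;
+
+static void sigprof_handler (int signo)
+@{
+  (void) signo;
+  prof_ticks++;
+@}
+
+int main (void)
+@{
+  struct sigaction sa;
+  struct itimerval it;
+  volatile double x = 0.0;
+  long i;
+
+  memset (&sa, 0, sizeof sa);
+  sa.sa_handler = sigprof_handler;
+  sigemptyset (&sa.sa_mask);
+  sa.sa_flags = SA_RESTART;
+  sigaction (SIGPROF, &sa, NULL);
+
+  it.it_interval.tv_sec = 0;
+  it.it_interval.tv_usec = 10000;   /* a tick every 10ms of CPU time */
+  it.it_value = it.it_interval;
+  setitimer (ITIMER_PROF, &it, NULL);
+
+  for (i = 0; i < 50000000L; i++)   /* burn some process time */
+    x += i * 0.5;
+
+  printf ("SIGPROF ticks: %ld  (x = %g)\n", (long) prof_ticks, x);
+  return 0;
+@}
+@end example
+
+On a platform without process-time timers (as described above for MS
+Windows), the same structure would have to be driven by a real-time
+timer instead.
+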
+It would be nice to use the Lisp allocation mechanism etc. to keep track
+of the profiling information (i.e. to use Lisp hash tables), but we
+can't because that's not safe -- updating the timing information happens
+inside of a signal handler, so we can't rely on not being in the middle
+of Lisp allocation, garbage collection, @code{malloc()}, etc.  Trying to make
+it work would be much more work than it's worth.  Instead we use a basic
+(non-Lisp) hash table, which will not conflict with garbage collection
+or anything else as long as it doesn't try to resize itself.  Resizing
+itself, however (which happens as a result of a @code{puthash()}), could be
+deadly.  To avoid this, we make sure, at points where it's safe
+(e.g. @code{profile_record_about_to_call()} -- recording the entry into a
+function call), that the table always has some breathing room in it so
+that no resizes will occur until at least that many items are added.
+This is safe because any new item to be added by the SIGPROF handler
+would likely have had @code{profile_record_about_to_call()} called just
+before it, at which point the breathing room is checked.
+
+In general: any entry that the SIGPROF handler puts into the table comes
+from a backtrace frame (except "Processing Events at Top Level", and
+there's only one of those).  Either that backtrace frame was added when
+profiling was on (in which case @code{profile_record_about_to_call()} was
+called and the breathing space updated), or when it was off -- and in
+this case, no such frames can have been added since the last time
+@code{start-profile} was called, so when @code{start-profile} is called we make
+sure there is sufficient breathing room to account for all entries
+currently on the stack.
+
+Jan 1998: In addition to timing info, I have added code to remember call
+counts of Lisp funcalls.  The @code{profile_increase_call_count()}
+function is called from @code{Ffuncall()}, and serves to add data to
+@code{Vcall_count_profile_table}.  This mechanism is much simpler and
+independent of the SIGPROF-driven one.  It uses the Lisp allocation
+mechanism normally, since it is not called from a handler.  It may
+even be useful to provide a way to turn on only one profiling
+mechanism, but I haven't done so yet.  --hniksic
+
+Dec 2002: Total overhaul of the interface, making it sane and easier to
+use. --ben
+
+Feb 2003: Lots of rewriting of the internal code.  Add GC-consing-usage,
+total GC usage, and total timing to the information tracked.  Track
+profiling overhead and allow the ability to have internal sections
+(e.g. internal-external conversion, byte-char conversion) that are
+treated like Lisp functions for the purpose of profiling.  --ben
+
+BEWARE: If you are modifying this code, be @strong{very} careful.  Correctly
+implementing the "total" values is very tricky due to the possibility of
+recursion and of functions already on the stack when starting to
+profile/still on the stack when stopping.
+
+@node Asynchronous Timeouts, Exiting, Profiling, Asynchronous Events; Quit Checking
+@section Asynchronous Timeouts
+@cindex asynchronous timeouts
+
+@node Exiting,  , Asynchronous Timeouts, Asynchronous Events; Quit Checking
+@section Exiting
+@cindex exiting
+@cindex crash
+@cindex hang
+@cindex core dump
+@cindex Armageddon
+@cindex exits, expected and unexpected
+@cindex unexpected exits
+@cindex expected exits
+
+Ben's capsule summary about expected and unexpected exits from XEmacs.
+
+Expected exits occur when the user directs XEmacs to exit, for example
+by pressing the close button on the only frame in XEmacs, or by typing
+@kbd{C-x C-c}.  This runs @code{save-buffers-kill-emacs}, which saves
+any necessary buffers, and then exits using the primitive
+@code{kill-emacs}.
+
+However, unexpected exits occur in a few different ways:
+
+@itemize @bullet
+@item
+A memory access violation or other hardware-generated exception occurs.
+This is the worst possible problem to deal with, because the fault can
+occur while XEmacs is in any state whatsoever, even quite unstable ones.
+As a result, we need to be @strong{extremely} careful what we do.
+
+@item
+We are using one X display (or if we've used more, we've closed the
+others already), and some hardware or other problem happens and
+suddenly we've lost our connection to the display.  In this situation,
+things are not so dire as in the last one; our code itself isn't
+trashed, so we can continue execution as normal, after having set
+things up so that we can exit at the appropriate time.  Our exit
+still needs to be of the emergency nature; we have no displays, so
+any attempts to use them will fail.  We simply want to auto-save
+(the single most important thing to do during shut-down), do minimal
+cleanup of stuff that has an independent existence outside of XEmacs,
+and exit.
+@end itemize
+
+Currently, both unexpected exit scenarios described above set
+@code{preparing_for_armageddon} to indicate that nonessential and possibly
+dangerous things should not be done, specifically:
+
+@itemize @minus
+@item
+no garbage collection.
+@item
+no hooks are run.
+@item
+no messages of any sort from autosaving.
+@item
+autosaving tries harder, ignoring certain failures.
+@item
+existing frames are not deleted.
+@end itemize
+
+(Also, all places that set @code{preparing_for_armageddon} also
+set @code{dont_check_for_quit}.  This happens separately because it's
+also necessary to set other variables to make absolutely sure
+no quitting happens.)
+
+In the first scenario above (the access violation), we also set
+@code{fatal_error_in_progress}.  This causes more things to not happen:
+
+@itemize @minus
+@item
+assertion failures do not abort.
+@item
+printing code does not do code conversion or gettext when
+printing to stdout/stderr.
+@end itemize
+
+@node Evaluation; Stack Frames; Bindings, Symbols and Variables, Asynchronous Events; Quit Checking, Top
 @chapter Evaluation; Stack Frames; Bindings
 @cindex evaluation; stack frames; bindings
 @cindex stack frames; bindings, evaluation;
 @cindex bindings, evaluation; stack frames;
 
 @menu
-* Evaluation::
-* Dynamic Binding; The specbinding Stack; Unwind-Protects::
-* Simple Special Forms::
-* Catch and Throw::
+* Evaluation::                  
+* Dynamic Binding; The specbinding Stack; Unwind-Protects::  
+* Simple Special Forms::        
+* Catch and Throw::             
 @end menu
 
-@node Evaluation
+@node Evaluation, Dynamic Binding; The specbinding Stack; Unwind-Protects, Evaluation; Stack Frames; Bindings, Evaluation; Stack Frames; Bindings
 @section Evaluation
 @cindex evaluation
 
@@ -8186,7 +8756,7 @@
 an array).  @code{apply1()} uses @code{Fapply()} while the others use
 @code{Ffuncall()} to do the real work.
 
-@node Dynamic Binding; The specbinding Stack; Unwind-Protects
+@node Dynamic Binding; The specbinding Stack; Unwind-Protects, Simple Special Forms, Evaluation, Evaluation; Stack Frames; Bindings
 @section Dynamic Binding; The specbinding Stack; Unwind-Protects
 @cindex dynamic binding; the specbinding stack; unwind-protects
 @cindex binding; the specbinding stack; unwind-protects, dynamic
@@ -8244,7 +8814,7 @@
 the symbol's value).
 @end enumerate
 
-@node Simple Special Forms
+@node Simple Special Forms, Catch and Throw, Dynamic Binding; The specbinding Stack; Unwind-Protects, Evaluation; Stack Frames; Bindings
 @section Simple Special Forms
 @cindex special forms, simple
 
@@ -8262,7 +8832,7 @@
 compiler knows how to convert calls to these functions directly into
 byte code.
 
-@node Catch and Throw
+@node Catch and Throw,  , Simple Special Forms, Evaluation; Stack Frames; Bindings
 @section Catch and Throw
 @cindex catch and throw
 @cindex throw, catch and
@@ -8323,18 +8893,18 @@
 created since the catch.
 
 
-@node Symbols and Variables, Buffers and Textual Representation, Evaluation; Stack Frames; Bindings, Top
+@node Symbols and Variables, Buffers, Evaluation; Stack Frames; Bindings, Top
 @chapter Symbols and Variables
 @cindex symbols and variables
 @cindex variables, symbols and
 
 @menu
-* Introduction to Symbols::
-* Obarrays::
-* Symbol Values::
+* Introduction to Symbols::     
+* Obarrays::                    
+* Symbol Values::               
 @end menu
 
-@node Introduction to Symbols
+@node Introduction to Symbols, Obarrays, Symbols and Variables, Symbols and Variables
 @section Introduction to Symbols
 @cindex symbols, introduction to
 
@@ -8352,7 +8922,7 @@
 additional values with particular names, and once again the namespace is
 independent of the function and variable namespaces.
 
-@node Obarrays
+@node Obarrays, Symbol Values, Introduction to Symbols, Symbols and Variables
 @section Obarrays
 @cindex obarrays
 
@@ -8420,7 +8990,7 @@
 into any obarray.) Finally, @code{mapatoms} maps over all of the symbols
 in an obarray.
 
-@node Symbol Values
+@node Symbol Values,  , Obarrays, Symbols and Variables
 @section Symbol Values
 @cindex symbol values
 @cindex values, symbol
@@ -8465,22 +9035,18 @@
 well-documented in comments in @file{buffer.c}, @file{symbols.c}, and
 @file{lisp.h}.
 
-@node Buffers and Textual Representation, MULE Character Sets and Encodings, Symbols and Variables, Top
-@chapter Buffers and Textual Representation
-@cindex buffers and textual representation
-@cindex textual representation, buffers and
+@node Buffers, Text, Symbols and Variables, Top
+@chapter Buffers
+@cindex buffers
 
 @menu
 * Introduction to Buffers::     A buffer holds a block of text such as a file.
-* The Text in a Buffer::        Representation of the text in a buffer.
 * Buffer Lists::                Keeping track of all buffers.
 * Markers and Extents::         Tagging locations within a buffer.
-* Ibytes and Ichars::           Representation of individual characters.
 * The Buffer Object::           The Lisp object corresponding to a buffer.
-* Searching and Matching::      Higher-level algorithms.
 @end menu
 
-@node Introduction to Buffers
+@node Introduction to Buffers, Buffer Lists, Buffers, Buffers
 @section Introduction to Buffers
 @cindex buffers, introduction to
 
@@ -8534,7 +9100,196 @@
 window. (This latter distinction is explained in detail in the section
 on windows.)
 
-@node The Text in a Buffer
+@node Buffer Lists, Markers and Extents, Introduction to Buffers, Buffers
+@section Buffer Lists
+@cindex buffer lists
+
+  Recall earlier that buffers are @dfn{permanent} objects, i.e.  that
+they remain around until explicitly deleted.  This entails that there is
+a list of all the buffers in existence.  This list is actually an
+assoc-list (mapping from the buffer's name to the buffer) and is stored
+in the global variable @code{Vbuffer_alist}.
+
+  The order of the buffers in the list is important: the buffers are
+ordered approximately from most-recently-used to least-recently-used.
+Switching to a buffer using @code{switch-to-buffer},
+@code{pop-to-buffer}, etc. and switching windows using
+@code{other-window}, etc.  usually brings the new current buffer to the
+front of the list.  @code{switch-to-buffer}, @code{other-buffer},
+etc. look at the beginning of the list to find an alternative buffer to
+suggest.  You can also explicitly move a buffer to the end of the list
+using @code{bury-buffer}.
+
+  In addition to the global ordering in @code{Vbuffer_alist}, each frame
+has its own ordering of the list.  These lists always contain the same
+elements as in @code{Vbuffer_alist} although possibly in a different
+order.  @code{buffer-list} normally returns the list for the selected
+frame.  This allows you to work in separate frames without things
+interfering with each other.
+
+  The standard way to look up a buffer given a name is
+@code{get-buffer}, and the standard way to create a new buffer is
+@code{get-buffer-create}, which looks up a buffer with a given name,
+creating a new one if necessary.  These operations correspond exactly
+with the symbol operations @code{intern-soft} and @code{intern},
+respectively.  You can also force a new buffer to be created using
+@code{generate-new-buffer}, which takes a name and (if necessary) makes
+a unique name from this by appending a number, and then creates the
+buffer.  This is basically like the symbol operation @code{gensym}.
+
+@node Markers and Extents, The Buffer Object, Buffer Lists, Buffers
+@section Markers and Extents
+@cindex markers and extents
+@cindex extents, markers and
+
+  Among the things associated with a buffer are things that are
+logically attached to certain buffer positions.  This can be used to
+keep track of a buffer position when text is inserted and deleted, so
+that it remains at the same spot relative to the text around it; to
+assign properties to particular sections of text; etc.  There are two
+such objects that are useful in this regard: they are @dfn{markers} and
+@dfn{extents}.
+
+  A @dfn{marker} is simply a flag placed at a particular buffer
+position, which is moved around as text is inserted and deleted.
+Markers are used for all sorts of purposes, such as the @code{mark} that
+is the other end of textual regions to be cut, copied, etc.
+
+  An @dfn{extent} is similar to two markers plus some associated
+properties, and is used to keep track of regions in a buffer as text is
+inserted and deleted, and to add properties (e.g. fonts) to particular
+regions of text.  The external interface of extents is explained
+elsewhere.
+
+  The important thing here is that markers and extents simply contain
+buffer positions in them as integers, and every time text is inserted or
+deleted, these positions must be updated.  In order to minimize the
+amount of shuffling that needs to be done, the positions in markers and
+extents (there's one per marker, two per extent) are stored in @code{Membpos}'s.
+This means that they only need to be moved when the text is physically
+moved in memory; since the gap structure tries to minimize this, it also
+minimizes the number of marker and extent indices that need to be
+adjusted.  Look in @file{insdel.c} for the details of how this works.
+
+  One other important distinction is that markers are @dfn{temporary}
+while extents are @dfn{permanent}.  This means that markers disappear as
+soon as there are no more pointers to them, and correspondingly, there
+is no way to determine what markers are in a buffer if you are just
+given the buffer.  Extents remain in a buffer until they are detached
+(which could happen as a result of text being deleted) or the buffer is
+deleted, and primitives do exist to enumerate the extents in a buffer.
+
+@node The Buffer Object,  , Markers and Extents, Buffers
+@section The Buffer Object
+@cindex buffer object, the
+@cindex object, the buffer
+
+  Buffers contain fields not directly accessible by the Lisp programmer.
+We describe them here, naming them by the names used in the C code.
+Many are accessible indirectly in Lisp programs via Lisp primitives.
+
+@table @code
+@item name
+The buffer name is a string that names the buffer.  It is guaranteed to
+be unique.  @xref{Buffer Names,,, lispref, XEmacs Lisp Reference
+Manual}.
+
+@item save_modified
+This field contains the time when the buffer was last saved, as an
+integer.  @xref{Buffer Modification,,, lispref, XEmacs Lisp Reference
+Manual}.
+
+@item modtime
+This field contains the modification time of the visited file.  It is
+set when the file is written or read.  Every time the buffer is written
+to the file, this field is compared to the modification time of the
+file.  @xref{Buffer Modification,,, lispref, XEmacs Lisp Reference
+Manual}.
+
+@item auto_save_modified
+This field contains the time when the buffer was last auto-saved.
+
+@item last_window_start
+This field contains the @code{window-start} position in the buffer as of
+the last time the buffer was displayed in a window.
+
+@item undo_list
+This field points to the buffer's undo list.  @xref{Undo,,, lispref,
+XEmacs Lisp Reference Manual}.
+
+@item syntax_table_v
+This field contains the syntax table for the buffer.  @xref{Syntax
+Tables,,, lispref, XEmacs Lisp Reference Manual}.
+
+@item downcase_table
+This field contains the conversion table for converting text to lower
+case.  @xref{Case Tables,,, lispref, XEmacs Lisp Reference Manual}.
+
+@item upcase_table
+This field contains the conversion table for converting text to upper
+case.  @xref{Case Tables,,, lispref, XEmacs Lisp Reference Manual}.
+
+@item case_canon_table
+This field contains the conversion table for canonicalizing text for
+case-folding search.  @xref{Case Tables,,, lispref, XEmacs Lisp
+Reference Manual}.
+
+@item case_eqv_table
+This field contains the equivalence table for case-folding search.
+@xref{Case Tables,,, lispref, XEmacs Lisp Reference Manual}.
+
+@item display_table
+This field contains the buffer's display table, or @code{nil} if it
+doesn't have one.  @xref{Display Tables,,, lispref, XEmacs Lisp
+Reference Manual}.
+
+@item markers
+This field contains the chain of all markers that currently point into
+the buffer.  Deletion of text in the buffer, and motion of the buffer's
+gap, must check each of these markers and perhaps update it.
+@xref{Markers,,, lispref, XEmacs Lisp Reference Manual}.
+
+@item backed_up
+This field is a flag that tells whether a backup file has been made for
+the visited file of this buffer.
+
+@item mark
+This field contains the mark for the buffer.  The mark is a marker,
+hence it is also included on the list @code{markers}.  @xref{The Mark,,,
+lispref, XEmacs Lisp Reference Manual}.
+
+@item mark_active
+This field is non-@code{nil} if the buffer's mark is active.
+
+@item local_var_alist
+This field contains the association list describing the variables local
+in this buffer, and their values, with the exception of local variables
+that have special slots in the buffer object.  (Those slots are omitted
+from this table.)  @xref{Buffer-Local Variables,,, lispref, XEmacs Lisp
+Reference Manual}.
+
+@item modeline_format
+This field contains a Lisp object which controls how to display the mode
+line for this buffer.  @xref{Modeline Format,,, lispref, XEmacs Lisp
+Reference Manual}.
+
+@item base_buffer
+This field holds the buffer's base buffer (if it is an indirect buffer),
+or @code{nil}.
+@end table
+
+@node Text, Multilingual Support, Buffers, Top
+@chapter Text
+@cindex text
+
+@menu
+* The Text in a Buffer::        Representation of the text in a buffer.
+* Ibytes and Ichars::           Representation of individual characters.
+* Byte-Char Position Conversion::  
+* Searching and Matching::      Higher-level algorithms.
+@end menu
+
+@node The Text in a Buffer, Ibytes and Ichars, Text, Text
 @section The Text in a Buffer
 @cindex text in a buffer, the
 @cindex buffer, the text in a
@@ -8676,192 +9431,174 @@
 number of possible alternative representations (e.g. EUC-encoded text,
 etc.).
 
-@node Buffer Lists
-@section Buffer Lists
-@cindex buffer lists
-
-  Recall earlier that buffers are @dfn{permanent} objects, i.e.  that
-they remain around until explicitly deleted.  This entails that there is
-a list of all the buffers in existence.  This list is actually an
-assoc-list (mapping from the buffer's name to the buffer) and is stored
-in the global variable @code{Vbuffer_alist}.
-
-  The order of the buffers in the list is important: the buffers are
-ordered approximately from most-recently-used to least-recently-used.
-Switching to a buffer using @code{switch-to-buffer},
-@code{pop-to-buffer}, etc. and switching windows using
-@code{other-window}, etc.  usually brings the new current buffer to the
-front of the list.  @code{switch-to-buffer}, @code{other-buffer},
-etc. look at the beginning of the list to find an alternative buffer to
-suggest.  You can also explicitly move a buffer to the end of the list
-using @code{bury-buffer}.
-
-  In addition to the global ordering in @code{Vbuffer_alist}, each frame
-has its own ordering of the list.  These lists always contain the same
-elements as in @code{Vbuffer_alist} although possibly in a different
-order.  @code{buffer-list} normally returns the list for the selected
-frame.  This allows you to work in separate frames without things
-interfering with each other.
-
-  The standard way to look up a buffer given a name is
-@code{get-buffer}, and the standard way to create a new buffer is
-@code{get-buffer-create}, which looks up a buffer with a given name,
-creating a new one if necessary.  These operations correspond exactly
-with the symbol operations @code{intern-soft} and @code{intern},
-respectively.  You can also force a new buffer to be created using
-@code{generate-new-buffer}, which takes a name and (if necessary) makes
-a unique name from this by appending a number, and then creates the
-buffer.  This is basically like the symbol operation @code{gensym}.
-
-@node Markers and Extents
-@section Markers and Extents
-@cindex markers and extents
-@cindex extents, markers and
-
-  Among the things associated with a buffer are things that are
-logically attached to certain buffer positions.  This can be used to
-keep track of a buffer position when text is inserted and deleted, so
-that it remains at the same spot relative to the text around it; to
-assign properties to particular sections of text; etc.  There are two
-such objects that are useful in this regard: they are @dfn{markers} and
-@dfn{extents}.
-
-  A @dfn{marker} is simply a flag placed at a particular buffer
-position, which is moved around as text is inserted and deleted.
-Markers are used for all sorts of purposes, such as the @code{mark} that
-is the other end of textual regions to be cut, copied, etc.
-
-  An @dfn{extent} is similar to two markers plus some associated
-properties, and is used to keep track of regions in a buffer as text is
-inserted and deleted, and to add properties (e.g. fonts) to particular
-regions of text.  The external interface of extents is explained
-elsewhere.
-
-  The important thing here is that markers and extents simply contain
-buffer positions in them as integers, and every time text is inserted or
-deleted, these positions must be updated.  In order to minimize the
-amount of shuffling that needs to be done, the positions in markers and
-extents (there's one per marker, two per extent) are stored in Membpos's.
-This means that they only need to be moved when the text is physically
-moved in memory; since the gap structure tries to minimize this, it also
-minimizes the number of marker and extent indices that need to be
-adjusted.  Look in @file{insdel.c} for the details of how this works.
-
-  One other important distinction is that markers are @dfn{temporary}
-while extents are @dfn{permanent}.  This means that markers disappear as
-soon as there are no more pointers to them, and correspondingly, there
-is no way to determine what markers are in a buffer if you are just
-given the buffer.  Extents remain in a buffer until they are detached
-(which could happen as a result of text being deleted) or the buffer is
-deleted, and primitives do exist to enumerate the extents in a buffer.
-
-@node Ibytes and Ichars
+@node Ibytes and Ichars, Byte-Char Position Conversion, The Text in a Buffer, Text
 @section Ibytes and Ichars
 @cindex Ibytes and Ichars
 @cindex Ichars, Ibytes and
 
   Not yet documented.
 
-@node The Buffer Object
-@section The Buffer Object
-@cindex buffer object, the
-@cindex object, the buffer
-
-  Buffers contain fields not directly accessible by the Lisp programmer.
-We describe them here, naming them by the names used in the C code.
-Many are accessible indirectly in Lisp programs via Lisp primitives.
-
-@table @code
-@item name
-The buffer name is a string that names the buffer.  It is guaranteed to
-be unique.  @xref{Buffer Names,,, lispref, XEmacs Lisp Reference
-Manual}.
-
-@item save_modified
-This field contains the time when the buffer was last saved, as an
-integer.  @xref{Buffer Modification,,, lispref, XEmacs Lisp Reference
-Manual}.
-
-@item modtime
-This field contains the modification time of the visited file.  It is
-set when the file is written or read.  Every time the buffer is written
-to the file, this field is compared to the modification time of the
-file.  @xref{Buffer Modification,,, lispref, XEmacs Lisp Reference
-Manual}.
-
-@item auto_save_modified
-This field contains the time when the buffer was last auto-saved.
-
-@item last_window_start
-This field contains the @code{window-start} position in the buffer as of
-the last time the buffer was displayed in a window.
-
-@item undo_list
-This field points to the buffer's undo list.  @xref{Undo,,, lispref,
-XEmacs Lisp Reference Manual}.
-
-@item syntax_table_v
-This field contains the syntax table for the buffer.  @xref{Syntax
-Tables,,, lispref, XEmacs Lisp Reference Manual}.
-
-@item downcase_table
-This field contains the conversion table for converting text to lower
-case.  @xref{Case Tables,,, lispref, XEmacs Lisp Reference Manual}.
-
-@item upcase_table
-This field contains the conversion table for converting text to upper
-case.  @xref{Case Tables,,, lispref, XEmacs Lisp Reference Manual}.
-
-@item case_canon_table
-This field contains the conversion table for canonicalizing text for
-case-folding search.  @xref{Case Tables,,, lispref, XEmacs Lisp
-Reference Manual}.
-
-@item case_eqv_table
-This field contains the equivalence table for case-folding search.
-@xref{Case Tables,,, lispref, XEmacs Lisp Reference Manual}.
-
-@item display_table
-This field contains the buffer's display table, or @code{nil} if it
-doesn't have one.  @xref{Display Tables,,, lispref, XEmacs Lisp
-Reference Manual}.
-
-@item markers
-This field contains the chain of all markers that currently point into
-the buffer.  Deletion of text in the buffer, and motion of the buffer's
-gap, must check each of these markers and perhaps update it.
-@xref{Markers,,, lispref, XEmacs Lisp Reference Manual}.
-
-@item backed_up
-This field is a flag that tells whether a backup file has been made for
-the visited file of this buffer.
-
-@item mark
-This field contains the mark for the buffer.  The mark is a marker,
-hence it is also included on the list @code{markers}.  @xref{The Mark,,,
-lispref, XEmacs Lisp Reference Manual}.
-
-@item mark_active
-This field is non-@code{nil} if the buffer's mark is active.
-
-@item local_var_alist
-This field contains the association list describing the variables local
-in this buffer, and their values, with the exception of local variables
-that have special slots in the buffer object.  (Those slots are omitted
-from this table.)  @xref{Buffer-Local Variables,,, lispref, XEmacs Lisp
-Reference Manual}.
-
-@item modeline_format
-This field contains a Lisp object which controls how to display the mode
-line for this buffer.  @xref{Modeline Format,,, lispref, XEmacs Lisp
-Reference Manual}.
-
-@item base_buffer
-This field holds the buffer's base buffer (if it is an indirect buffer),
-or @code{nil}.
-@end table
-
-@node Searching and Matching
+@node Byte-Char Position Conversion, Searching and Matching, Ibytes and Ichars, Text
+@section Byte-Char Position Conversion
+@cindex byte-char position conversion
+@cindex position conversion, byte-char
+@cindex conversion, byte-char position
+
+Oct 2004:
+
+This is what I wrote when describing the previous algorithm:
+
+@quotation
+The basic algorithm we use is to keep track of a known region of
+characters in each buffer, all of which are of the same width.  We keep
+track of the boundaries of the region in both Charbpos and Bytebpos
+coordinates and also keep track of the char width, which is 1 - 4 bytes.
+If the position we're translating is not in the known region, then we
+invoke a function to update the known region to surround the position in
+question.  This assumes locality of reference, which is usually the
+case.
+
+Note that the function to update the known region can be simple or
+complicated depending on how much information we cache.  In addition to
+the known region, we always cache the correct conversions for point,
+BEGV, and ZV, and in addition to this we cache 16 positions where the
+conversion is known.  We only look in the cache or update it when we
+need to move the known region more than a certain amount (currently 50
+chars), and then we throw away a "random" value and replace it with the
+newly calculated value.
+
+Finally, we maintain an extra flag that tracks whether the buffer is
+entirely ASCII, to speed up the conversions even more.  This flag is
+actually of dubious value because in an entirely-ASCII buffer the known
+region will always span the entire buffer (in fact, we update the flag
+based on this fact), and so all we're saving is a few machine cycles.
+
+A potentially smarter method than what we do with known regions and
+cached positions would be to keep some sort of pseudo-extent layer over
+the buffer; maybe keep track of the charbpos/bytebpos correspondence at
+the beginning of each line, which would allow us to do a binary search
+over the pseudo-extents to narrow things down to the correct line, at
+which point you could use a linear movement method.  This would also
+mesh well with efficiently implementing a line-numbering scheme.
+However, you have to weigh the amount of time spent updating the cache
+vs. the savings that result from it.  In reality, we modify the buffer
+far less often than we access it, so a cache of this sort that provides
+guaranteed LOG (N) performance (or perhaps N * LOG (N), if we set a
+maximum on the cache size) would indeed be a win, particularly in very
+large buffers.  If we ever implement this, we should probably set a
+reasonably high minimum below which we use the old method, because the
+time spent updating the fancy cache would likely become dominant when
+making buffer modifications in smaller buffers.
+
+Note also that we have to multiply or divide by the char width in order
+to convert the positions.  We do some tricks to avoid ever actually
+having to do a multiply or divide, because that is typically an
+expensive operation (esp. divide).  Multiplying or dividing by 1, 2, or
+4 can be implemented simply as a shift left or shift right, and we keep
+track of a shifter value (0, 1, or 2) indicating how much to shift.
+Multiplying by 3 can be implemented by doubling and then adding the
+original value.  Dividing by 3, alas, cannot be implemented in any
+simple shift/subtract method, as far as I know; so we just do a table
+lookup.  For simplicity, we use a table of size 128K, which indexes the
+"divide-by-3" values for the first 64K non-negative numbers. (Note that
+we can increase the size up to 384K, i.e. indexing the first 192K
+non-negative numbers, while still using shorts in the array.) This also
+means that the size of the known region can be at most 64K for
+width-three characters.
+@end quotation
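+
+As an illustration of the shift and divide-by-3 tricks described in the
+quotation above, the conversions might have looked something like this
+(a sketch only; the variable and function names are not the actual
+ones):
+
+@example
+/* Multiply/divide by the char width (1, 2, 3 or 4) without actual
+   multiply or divide instructions.  SHIFTER is 0, 1 or 2 for widths
+   1, 2 and 4 respectively.  The table holds i / 3 for the first 64K
+   non-negative numbers (64K shorts == 128K bytes) and is filled in at
+   initialization time; this is why the known region is limited to 64K
+   for width-three characters. */
+static short divide_by_3_table[65536];
+
+static Bytecount
+chars_to_bytes (Charcount nchars, int width, int shifter)
+@{
+  if (width == 3)
+    return nchars + (nchars << 1);    /* n * 3 == n + 2 * n */
+  return nchars << shifter;           /* widths 1, 2 and 4 */
+@}
+
+static Charcount
+bytes_to_chars (Bytecount nbytes, int width, int shifter)
+@{
+  if (width == 3)
+    return divide_by_3_table[nbytes]; /* table lookup instead of divide */
+  return nbytes >> shifter;           /* widths 1, 2 and 4 */
+@}
+@end example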
+
+Unfortunately, it turned out that the implementation had serious problems
+which had never been corrected.  In particular, the known region had a
+large tendency to become zero-length and stay that way.
+
+So I decided to port the algorithm from FSF 21.3, in @file{markers.c}.
+
+This algorithm is fairly simple.  Instead of using markers, I kept the
+cache array of known positions from the previous implementation.
+
+Basically, we keep a number of positions cached:
+
+@itemize @bullet
+@item
+the actual end of the buffer
+@item
+the beginning and end of the accessible region
+@item
+the value of point
+@item
+the position of the gap
+@item
+the last value we computed
+@item
+a set of positions that are "far away" from previously computed positions
+(5000 chars currently; #### perhaps should be smaller)
+@end itemize
+
+For each position, we @code{CONSIDER()} it.  This means:
+
+@itemize @bullet
+@item
+If the position is what we're looking for, return it directly.
+@item
+Starting with the beginning and end of the buffer, we successively
+compute the smallest enclosing range of known positions.  If at any
+point we discover that this range has the same byte and char length
+(i.e. is entirely single-byte), then our computation is trivial.
+@item
+If at any point we get a small enough range (50 chars currently),
+stop considering further positions.
+@end itemize
+
+Otherwise, once we have an enclosing range, see which side is closer, and
+iterate until we find the desired value.  As an optimization, I replaced
+the simple loop in FSF with the use of @code{bytecount_to_charcount()},
+@code{charcount_to_bytecount()}, @code{bytecount_to_charcount_down()}, or
+@code{charcount_to_bytecount_down()}. (The latter two I added for this purpose.) 
+These scan 4 or 8 bytes at a time through purely single-byte characters.
+
+If the amount we had to scan was more than our "far away" distance (5000
+characters, see above), then cache the new position.
+
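+The word-at-a-time scanning might look roughly like this (a simplified
+sketch, not the actual @code{bytecount_to_charcount()}; it assumes, as
+in the real internal format, that single-byte characters never have
+their high bit set, and @code{sketch_char_length()} is a hypothetical
+stand-in for the real length-by-first-byte lookup):
+
+@example
+/* Return the number of characters in the LEN bytes of internal-format
+   text starting at PTR.  Runs of purely single-byte (high-bit-clear)
+   characters are skipped one word at a time. */
+static Charcount
+sketch_bytecount_to_charcount (const unsigned char *ptr, Bytecount len)
+@{
+  const unsigned char *end = ptr + len;
+  /* 0x80 repeated in every byte of an unsigned long. */
+  const unsigned long high_bits = ((unsigned long) -1 / 0xFF) * 0x80;
+  Charcount count = 0;
+
+  while (ptr < end)
+    @{
+      /* Fast path: whole aligned words containing no high-bit bytes. */
+      if ((size_t) ptr % sizeof (unsigned long) == 0)
+        while (ptr + sizeof (unsigned long) <= end
+               && !(*(const unsigned long *) ptr & high_bits))
+          @{
+            ptr += sizeof (unsigned long);
+            count += sizeof (unsigned long);
+          @}
+      if (ptr == end)
+        break;
+      /* Slow path: step over one (possibly multi-byte) character. */
+      ptr += sketch_char_length (*ptr);
+      count++;
+    @}
+  return count;
+@}
+@end example
+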
+#### Things to do:
+
+@itemize @bullet
+@item
+Look at the most recent GNU Emacs to see whether anything has changed.
+@item
+Think about whether it makes sense to try to implement some sort of
+known region or list of "known regions", like we had before.  This would
+be a region of entirely single-byte characters that we can check very
+quickly. (Previously I used a range of same-width characters of any
+size; but this adds extra complexity and slows down the scanning, and is
+probably not worth it.) As part of the scanning process in
+@code{bytecount_to_charcount()} et al, we skip over chunks of entirely
+single-byte chars, so it should be easy to remember the last one.
+Presumably what we should do is keep track of the largest known surrounding
+entirely-single-byte region for each of the cache positions as well as
+perhaps the last-cached position.  We want to be careful not to get bitten
+by the previous problem of having the known region getting reset too
+often.  If we implement this, we might well want to continue scanning
+some distance past the desired position (maybe 300-1000 bytes) if we are
+in a single-byte range so that we won't end up expanding the known range
+one position at a time and entering the function each time.
+@item
+Think about whether it makes sense to keep the position cache sorted.
+This would allow it to be larger and finer-grained in its positions.
+Note that with FSF's use of markers, they were sorted, but this
+was not really made good use of.  With an array, we can do binary searching
+to quickly find the smallest range.  We would probably want to make use of
+the gap-array code in @file{extents.c}.
+@end itemize
+
+Note that FSF's algorithm checked @strong{ALL} markers, not just the ones cached
+by this algorithm.  This includes markers created by the user as well as
+both ends of any overlays.  We could do similarly, and our extents could
+keep both byte and character positions rather than just the former.  (But
+this would probably be overkill.  We should just use our cache instead.
+Any place an extent was set was surely already visited by the char<-->byte
+conversion routines.)
+
+@node Searching and Matching,  , Byte-Char Position Conversion, Text
 @section Searching and Matching
 @cindex searching
 @cindex matching
@@ -9082,13 +9819,23 @@
 But if you keep your eye on the "switch in a loop" structure, you
 should be able to understand the parts you need.
 
-
-@node MULE Character Sets and Encodings, The Lisp Reader and Compiler, Buffers and Textual Representation, Top
-@chapter MULE Character Sets and Encodings
+@node Multilingual Support, The Lisp Reader and Compiler, Text, Top
+@chapter Multilingual Support
 @cindex Mule character sets and encodings
 @cindex character sets and encodings, Mule
 @cindex encodings, Mule character sets and
 
+@emph{NOTE}: There is a great deal of overlapping and redundant
+information in this chapter.  Ben wrote introductions to Mule issues a
+number of times, each time not realizing that he had already written
+another introduction previously.  Hopefully, in time these will all be
+integrated.
+
+  @emph{NOTE}: The information at the top of the source file
+@file{text.c} is more complete than the following, and there is also a
+list of all other places to look for text/I18N-related info.  Also look in
+@file{text.h} for info about the DFC and Eistring API's.
+
   Recall that there are two primary ways that text is represented in
 XEmacs.  The @dfn{buffer} representation sees the text as a series of
 bytes (Ibytes), with a variable number of bytes used per character.
@@ -9102,24 +9849,660 @@
 representation is that it's compact and is compatible with ASCII.
 
 @menu
-* Character Sets::
-* Encodings::
-* Internal Mule Encodings::
-* CCL::
+* Introduction to Multilingual Issues #1::  
+* Introduction to Multilingual Issues #2::  
+* Introduction to Multilingual Issues #3::  
+* Introduction to Multilingual Issues #4::  
+* Character Sets::              
+* Encodings::                   
+* Internal Mule Encodings::     
+* Byte/Character Types; Buffer Positions; Other Typedefs::  
+* Internal Text API's::         
+* Coding for Mule::             
+* CCL::                         
+* Modules for Internationalization::  
 @end menu
 
-@node Character Sets
+@node Introduction to Multilingual Issues #1, Introduction to Multilingual Issues #2, Multilingual Support, Multilingual Support
+@section Introduction to Multilingual Issues #1
+@cindex introduction to multilingual issues #1
+
+There is an introduction to these issues in the XEmacs Lisp Reference
+Manual.  @xref{Internationalization Terminology,,, lispref, XEmacs Lisp
+Reference Manual}.  Other documentation that may be of interest to
+internals programmers includes the material on ISO 2022 (@pxref{ISO
+2022,,, lispref, XEmacs Lisp Reference Manual}) and CCL (@pxref{CCL,,,
+lispref, XEmacs Lisp Reference Manual}).
+
+@node Introduction to Multilingual Issues #2, Introduction to Multilingual Issues #3, Introduction to Multilingual Issues #1, Multilingual Support
+@section Introduction to Multilingual Issues #2
+@cindex introduction to multilingual issues #2
+
+@subheading Introduction
+
+This document covers a number of design issues, problems, and proposals
+with regard to XEmacs MULE.  First, we present some definitions and
+some aspects of the design that have been agreed upon.  Then we present
+some issues and problems that need to be addressed, and then I include a
+proposal of mine to address some of these issues.  When there are other
+proposals, for example from Olivier, these will be appended to the end
+of this document.
+
+@subheading Definitions and Design Basics
+
+First, @dfn{text} is defined to be a series of characters which together
+defines an utterance or partial utterance in some language.
+Generally, this language is a human language, but it may also be a
+computer language if the computer language uses a representation close
+enough to that of human languages for it to also make sense to call its
+representation text.  Text is opposed to @dfn{binary}, which is a sequence
+of bytes, representing machine-readable but not human-readable data.
+A @dfn{byte} is merely a number within a predefined range, which nowadays is
+nearly always zero to 255.  A @dfn{character} is a unit of text.  What makes
+one character different from another is not always clear-cut.  It is
+generally related to the appearance of the character, although perhaps
+not any possible appearance of that character, but some sort of ideal
+appearance that is assigned to a character.  Whether two characters
+that look very similar are actually the same depends on various
+factors, some of them political, such as whether the characters are
+used to mean similar sorts of things or behave similarly in similar
+contexts.  In any case, it is not always clearly defined whether two
+characters are actually the same or not.  In practice, however, this
+is more or less agreed upon.
+
+A @dfn{character set} is just that, a set of one or more characters.
+The set is unique in that there will not be more than one instance of
+the same character in a character set, and logically is unordered,
+although an order is often imposed or suggested for the characters in
+the character set.  We can also define an @dfn{order} on a character
+set, which is a way of assigning a unique number, or possibly a pair of
+numbers, or a triplet of numbers, or even a set of four or more numbers
+to each character in the character set.  The combination of an order
+with a character set results in an @dfn{ordered character set}.  In an
+ordered character set, there is an upper limit and a lower limit on the
+possible values that a character, or that any number within the set of
+numbers assigned to a character, can take.  However, the lower limit
+does not have to start at zero or one, or anywhere else in particular,
+nor does the upper limit have to end anywhere particular, and there may
+be gaps within these ranges such that particular numbers or sets of
+numbers do not have a corresponding character, even though they are
+within the upper and lower limits.  For example, @dfn{ASCII} defines a
+very standard ordered character set.  It is normally defined to be 94
+characters in the range 33 through 126 inclusive on both ends, with
+every possible character within this range being actually present in the
+character set.
+
+Sometimes the ASCII character set is extended to include what are called
+@dfn{non-printing characters}.  Non-printing characters are characters
+which, instead of being displayed in a more or less rectangular block
+like all other characters, perform certain functions: they may control
+the display upon which the characters are being shown, have some effect
+on a communications channel that may be currently open and transmitting
+characters, change the meaning of future characters as they are being
+decoded, or do something else along these lines.
+are somewhat of a hack because they are a special exception to the
+standard concept of a character as being a printed glyph that has some
+direct correspondence in the non-computer world.
+
+With non-printing characters in mind, the 94-character ordered character
+set called ASCII is often extended into a 96-character ordered character
+set, also often called ASCII, which includes in addition to the 94
+characters already mentioned, two non-printing characters, one called
+space and assigned the number 32, just below the bottom of the previous
+range, and another called @dfn{delete} or @dfn{rubout}, which is given
+number 127 just above the end of the previous range.  Thus to reiterate,
+the result is a 96-character ordered character set, whose characters
+take the values from 32 to 127 inclusive.  Sometimes ASCII is further
+extended to contain 32 more non-printing characters, which are given the
+numbers zero through 31 so that the result is a 128-character ordered
+character set with characters numbered zero through 127, and with many
+non-printing characters.  Another way to look at this, and the way that
+is normally taken by XEmacs MULE, is that the characters that would be
+in the range zero through 31 in the most extended definition of ASCII
+instead form their own ordered character set, which is called
+@dfn{control zero}, and consists of 32 characters in the range zero
+through 31.  A similar ordered character set called @dfn{control one} is
+also created, and it contains 32 more non-printing characters in the
+range 128 through 159.  Note that none of these three ordered character
+sets overlaps in any of the numbers they are assigned to their
+characters, so they can all be used at once.  Note further that the same
+character can occur in more than one character set.  This was shown
+above, for example, in two different ordered character sets we defined,
+one of which we could have called @dfn{ASCII}, and the other
+@dfn{ASCII-extended}, to show that it had been extended by two non-printing
+characters.  Most of the characters in these two character sets are
+shared and present in both of them.
+
+Note that there is no restriction on the size of the character set, or
+on the numbers that are assigned to characters in an ordered character
+set.  It is often extremely useful to represent a sequence of characters
+as a sequence of bytes, where a byte as defined above is a number in the
+range zero to 255.  An @dfn{encoding} does precisely this.  It is simply
+a mapping from a sequence of characters, possibly augmented with
+information indicating the character set that each of these characters
+belongs to, to a sequence of bytes which represents that sequence of
+characters and no other -- which is to say the mapping is reversible.
+
+A @dfn{coding system} is a set of rules for encoding a sequence of
+characters augmented with character set information into a sequence of
+bytes, and later performing the reverse operation.  It is frequently
+possible to group coding systems into classes or types based on common
+features.  Typically, for example, a particular coding system class
+may contain a base coding system which specifies some of the rules,
+but leaves the rest unspecified.  Individual members of the coding
+system class are formed by starting with the base coding system, and
+augmenting it with additional rules to produce a particular coding
+system, what you might think of as a sort of variation within a
+theme.
+
+@subheading XEmacs Specific Definitions
+
+First of all, in XEmacs, the concept of character is a little different
+from the general definition given above.  For one thing, the character
+set that a character belongs to may or may not be an inherent part of
+the character itself.  In other words, the same character occurring in
+two different character sets may appear in XEmacs as two different
+characters.  This is generally the case now, but we are attempting to
+move in the other direction.  Different proposals may have different
+ideas about exactly the extent to which this change will be carried out.
+The general trend, though, is to represent all information about a
+character other than the character itself, using text properties
+attached to the character.  That way two instances of the same character
+will look the same to lisp code that merely retrieves the character, and
+does not also look at the text properties of that character.  Everyone
+involved is in agreement in doing it this way with all Latin characters,
+and in fact for all characters other than Chinese, Japanese, and Korean
+ideographs.  For those, there may be a difference of opinion.
+
+A second difference between the general definition of character and the
+XEmacs usage of character is that each character is assigned a unique
+number that distinguishes it from all other characters in the world, or
+at the very least, from all other characters currently existing anywhere
+inside the current XEmacs invocation.  (If there is a case where the
+weaker statement applies, but not the stronger statement, it would
+possibly be with composite characters and any other such characters that
+are created on the sly.)
+
+This unique number is called the @dfn{character representation} of the
+character, and its particular details are a matter of debate.  There is
+a current standard in use, but it is undoubtedly going to change.  What
+has definitely been agreed upon is that it will be an integer, more
+specifically a positive integer, represented with less than or equal to
+31 bits on a 32-bit architecture, and possibly up to 63 bits on a 64-bit
+architecture, with the proviso that any characters whose representation
+would fit on a 64-bit architecture, but not on a 32-bit architecture,
+would be used only for composite characters and others that satisfy the
+weak uniqueness property mentioned above, but not the strong uniqueness
+property.
+
+At this point, it is useful to talk about the different representations
+that a sequence of characters can take.  The simplest representation is
+simply as a sequence of characters, and this is called the @dfn{Lisp
+representation} of text, because it is the representation that Lisp
+programs see.  Other representations include the external
+representation, which refers to any encoding of the sequence of
+characters, using the definition of encoding mentioned above.
+Typically, text in the external representation is used outside of
+XEmacs, for example in files, e-mail messages, web sites, and the like.
+Another representation for a sequence of characters is what I will call
+the @dfn{byte representation}, and it represents the way that XEmacs
+internally represents text in a buffer, or in a string.  Potentially,
+the representation could be different between a buffer and a string, and
+then the terms @dfn{buffer byte representation} and @dfn{string byte
+representation} would be used, but in practice I don't think this will
+occur.  It will be possible, of course, for buffers and strings, or
+particular buffers and particular strings, to contain different
+sub-representations of a single representation.  For example, Olivier's
+1-2-4 proposal allows for three sub-representations of his internal byte
+representation, allowing for 1-byte, 2-byte, and 4-byte wide
+characters respectively.  A particular string may be in one
+sub-representation, and a particular buffer in another
+sub-representation, but overall both are following the same byte
+representation.  I do not use the term @dfn{internal representation}
+here, as many people have, because it is potentially ambiguous.
+
+Another representation is called the @dfn{array of characters
+representation}.  This is a representation on the C-level in which the
+sequence of text is represented, not using the byte representation, but
+by using an array of characters, each represented using the character
+representation.  This sort of representation is often used by redisplay
+because it is more convenient to work with than any of the other
+internal representations.
+
+The term @dfn{binary representation} may also be heard.  Binary
+representation is used to represent binary data.  When binary data is
+represented in the lisp representation, an equivalence is simply set up
+between bytes zero through 255, and characters zero through 255.  These
+characters come from four character sets, which are from bottom to top,
+control zero, ASCII, control 1, and Latin 1.  Together, they comprise
+256 characters, and are a good mapping for the 256 possible bytes in a
+binary representation.  Binary representation could also be used to
+refer to an external representation of the binary data, which is a
+simple direct byte-to-byte representation.  No internal representation
+should ever be referred to as a binary representation because of
+ambiguity.  The terms character set/encoding system were defined
+generally, above.  In XEmacs, the equivalent concepts exist, although
+character set has been shortened to charset, and in fact represents
+specifically an ordered character set.  For each possible charset, and
+for each possible coding system, there is an associated object in
+XEmacs.  These objects will be of type charset and coding system,
+respectively.  Charsets and coding systems are divided into classes, or
+@dfn{types}, the normal term under XEmacs, and all possible charsets
+encoding systems that may be defined must be in one of these types.  If
+you need to create a charset or coding system that is not one of these
+types, you will have to modify the C code to support this new type.
+Some of the existing or soon-to-be-created types are, or will be,
+generic enough so that this shouldn't be an issue.  Note also that the
+byte encoding for text and the character coding of a character are
+closely related.  You might say that ideally each is the simplest
+equivalent of the other given the general constraints on each
+representation.
+
+To be specific, in the current MULE representation,
+
+@enumerate
+@item
+Characters encode both the character itself and the character set
+that it comes from.  These character sets are always assumed to be
+representable as an ordered character set of size 96 or of size 96
+by 96, or the trivially-related sizes 94 and 94 by 94.  The only
+allowable exceptions are the control zero and control one character
+sets, which are of size 32.  Character sets which do not naturally
+have a compatible ordering such as this are shoehorned into an
+ordered character set, or possibly two ordered character sets of a
+compatible size.
+@item
+The variable width byte representation was deliberately chosen to
+allow scanning text forwards and backwards efficiently.  This
+necessitated defining the possible bytes into three ranges which
+we shall call A, B, and C.  Range A is used exclusively for
+single-byte characters, which is to say characters that are
+represented using only one contiguous byte.  Multi-byte
+characters are always represented by using one byte from Range B,
+followed by one or more bytes from Range C.  What this means is
+that bytes that begin a character are unequivocally distinguished
+from bytes that do not begin a character, and therefore there is
+never a problem scanning backwards and finding the beginning of a
+character.  Note that UTF-8 adopts an approach that is very similar
+in spirit, in that it uses separate ranges for the first byte of a
+multi-byte sequence and for the following bytes of a multi-byte
+sequence.
+@item
+Given the fact that all ordered character sets allowed were
+essentially 96 characters per dimension, it made perfect sense to
+make Range C comprise 96 bytes.  With a little more tweaking, the
+currently-standard MULE byte representation was drafted from
+these choices.
+@item
+The MULE byte representation defined four basic representations for
+characters, which would take up from one to four bytes,
+respectively.  The MULE character representation thus had the
+following constraints:
+@enumerate
+@item
+Character numbers zero through 255 should represent the
+characters that binary values zero through 255 would be
+mapped onto.  (Note: this was not the case in Kenichi Handa's
+version of this representation, but I changed it.)
+@item
+The four sub-classes of representation in the MULE byte
+representation should correspond to four contiguous
+non-overlapping ranges of characters.
+@item
+The algorithmic conversion between the single character
+represented in the byte representation and in the character
+representation should be as easy as possible.
+@item
+Given the previous constraints, the character representation
+should be as compact as possible, which is to say it should
+use the least number of bits possible.
+@end enumerate
+@end enumerate
+
+So you see that the entire structure of the byte and character
+representations stemmed from a very small number of basic choices,
+which were
+
+@enumerate
+@item
+the choice to encode character set information in a character
+@item
+the choice to assume that all character sets would have an order
+imposed upon them with 96 characters per one or two
+dimensions. (This is less arbitrary than it seems--it follows
+ISO-2022)
+@item
+the choice to use a variable width byte representation.
+@end enumerate
+
+What this means is that the byte representation, the character
+representation, and the assumptions made about characters and the
+character sets they belong to cannot really be separated from each
+other.  All of these are closely intertwined, and for purposes of
+simplicity, they should be designed together.  If you change one
+representation without changing another, you are in essence creating a
+completely new design; since your new design is likely to be quite
+complex and not very coherent with regard to the translation between
+the character and byte representations, you are likely to run into
+problems.
+
+@node Introduction to Multilingual Issues #3, Introduction to Multilingual Issues #4, Introduction to Multilingual Issues #2, Multilingual Support
+@section Introduction to Multilingual Issues #3
+@cindex introduction to multilingual issues #3
+
+In XEmacs, Mule is a code word for the support for input, handling, and
+display of multi-lingual text.  This section provides an overview of how
+this support impacts the C and Lisp code in XEmacs.  It is important for
+anyone who works on the C or the Lisp code, especially on the C code, to
+be aware of these issues, even if they don't work directly on code that
+implements multi-lingual features, because there are various general
+procedures that need to be followed in order to write Mule-compliant
+code.  (The specifics of these procedures are documented elsewhere in
+this manual.)
+
+There are four primary aspects of Mule support:
+
+@enumerate
+@item
+internal handling and representation of multi-lingual text.
+@item
+conversion between the internal representation of text and the various
+external representations in which multi-lingual text is encoded, such as
+Unicode representations (including mostly fixed width encodings such as
+UCS-2/UTF-16 and UCS-4 and variable width ASCII conformant encodings,
+such as UTF-7 and UTF-8); the various ISO2022 representations, which
+typically use escape sequences to switch between different character
+sets (such as Compound Text, used under X Windows; JIS, used
+specifically for encoding Japanese; and EUC, a non-modal encoding used
+for Japanese, Korean, and certain other languages); Microsoft's
+multi-byte encodings (such as Shift-JIS); various simple encodings for
+particular 8-bit character sets (such as Latin-1 and Latin-2, and
+encodings (such as koi8 and Alternativny) for Cyrillic); and others.
+This conversion needs to happen both for text in files and text sent to
+or retrieved from system API calls.  It even needs to happen for
+external binary data because the internal representation does not
+represent binary data simply as a sequence of bytes as it is represented
+externally.
+@item
+Proper display of multi-lingual characters.
+@item
+Input of multi-lingual text using the keyboard.
+@end enumerate
+
+These four aspects are for the most part independent of each other.
+
+@subheading Characters, Character Sets, and Encodings
+
+A @dfn{character} (which is, BTW, a surprisingly complex concept) is, in
+a written representation of text, the most basic written unit that has a
+meaning of its own.  It's comparable to a phoneme when analyzing words
+in spoken speech (for example, the sound of @samp{t} in English, which
+in fact has different pronunciations in different words -- aspirated in
+@samp{time}, unaspirated in @samp{stop}, unreleased or even pronounced
+as a glottal stop in @samp{button}, etc. -- but logically is a single
+concept).  Like a phoneme, a character is an abstract concept defined by
+its @emph{meaning}.  The character @samp{lowercase f}, for example, can
+always be used to represent the first letter in the word @samp{fill},
+regardless of whether it's drawn upright or italic, whether the
+@samp{fi} combination is drawn as a single ligature, whether there are
+serifs on the bottom of the vertical stroke, etc. (These different
+appearances of a single character are often called @dfn{graphs} or
+@dfn{glyphs}.) Our concern when representing text is on representing the
+abstract characters, and not on their exact appearance.
+
+A @dfn{character set} (or @dfn{charset}), as we define it, is a set of
+characters, each with an associated number (or set of numbers -- see
+below), called a @dfn{code point}.  It's important to understand that a
+character is not defined by any number attached to it, but by its
+meaning.  For example, ASCII and EBCDIC are two charsets containing
+exactly the same characters (lowercase and uppercase letters, numbers 0
+through 9, particular punctuation marks) but with different
+numberings. The `comma' character in ASCII and EBCDIC, for instance, is
+the same character despite having a different numbering.  Conversely,
+when comparing ASCII and JIS-Roman, which look the same except that the
+latter has a yen sign substituted for the backslash, we would say that
+the backslash and yen sign are @strong{not} the same characters, despite having
+the same number (95) and despite the fact that all other characters are
+present in both charsets, with the same numbering.  ASCII and JIS-Roman,
+then, do @emph{not} have exactly the same characters in them (ASCII has
+a backslash character but no yen-sign character, and vice-versa for
+JIS-Roman), unlike ASCII and EBCDIC, even though the numberings in ASCII
+and JIS-Roman are closer.
+
+It's also important to distinguish between charsets and encodings.  For
+a simple charset like ASCII, there is only one encoding normally used --
+each character is represented by a single byte, with the same value as
+its code point.  For more complicated charsets, however, things are not
+so obvious.  Unicode version 2, for example, is a large charset with
+thousands of characters, each indexed by a 16-bit number, often
+represented in hex, e.g. 0x05D0 for the Hebrew letter "aleph".  One
+obvious encoding uses two bytes per character (actually two encodings,
+depending on which of the two possible byte orderings is chosen).  This
+encoding is convenient for internal processing of Unicode text; however,
+it's incompatible with ASCII, so a different encoding, e.g. UTF-8, is
+usually used for external text, for example files or e-mail.  UTF-8
+represents Unicode characters with one to three bytes (often extended to
+six bytes to handle characters with up to 31-bit indices).  Unicode
+characters 00 to 7F (identical with ASCII) are directly represented with
+one byte, and other characters with two or more bytes, each in the range
+80 to FF.
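+
+For concreteness, here is how a Unicode code point in the 16-bit range
+maps onto one to three UTF-8 bytes (a sketch of standard UTF-8, not a
+piece of XEmacs code):
+
+@example
+/* Encode CODE (a Unicode code point from 0 to 0xFFFF) as UTF-8 into
+   OUT, returning the number of bytes written (1 to 3).  For example,
+   0x05D0 (Hebrew aleph) becomes the two bytes 0xD7 0x90. */
+static int
+utf8_encode_bmp (unsigned int code, unsigned char *out)
+@{
+  if (code < 0x80)              /* 0xxxxxxx */
+    @{
+      out[0] = (unsigned char) code;
+      return 1;
+    @}
+  else if (code < 0x800)        /* 110xxxxx 10xxxxxx */
+    @{
+      out[0] = 0xC0 | (code >> 6);
+      out[1] = 0x80 | (code & 0x3F);
+      return 2;
+    @}
+  else                          /* 1110xxxx 10xxxxxx 10xxxxxx */
+    @{
+      out[0] = 0xE0 | (code >> 12);
+      out[1] = 0x80 | ((code >> 6) & 0x3F);
+      out[2] = 0x80 | (code & 0x3F);
+      return 3;
+    @}
+@}
+@end example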
+
+In general, a single encoding may be able to represent more than one
+charset.
+
+@subheading Internal Representation of Text
+
+In an ASCII or single-European-character-set world, life is very simple.
+There are 256 characters, and each character is represented using the
+numbers 0 through 255, which fit into a single byte.  With a few
+exceptions (such as case-changing operations or syntax classes like
+'whitespace'), "text" is simply an array of indices into a font.  You
+can get different languages simply by choosing fonts with different
+8-bit character sets (ISO-8859-1, -2, special-symbol fonts, etc.), and
+everything will "just work" as long as anyone else receiving your text
+uses a compatible font.
+
+In the multi-lingual world, however, it is much more complicated.  There
+are a great number of different characters which are organized in a
+complex fashion into various character sets.  The representation to use
+is not obvious because there are issues of size versus speed to
+consider.  In fact, there are in general two kinds of representations to
+work with: one that represents a single character using an integer
+(possibly a byte), and the other representing a single character as a
+sequence of bytes.  The former representation is normally called fixed
+width, and the other variable width. Both representations represent
+exactly the same characters, and the conversion from one representation
+to the other is governed by a specific formula (rather than by table
+lookup) but it may not be simple.  Most C code need not, and in fact
+should not, know the specifics of exactly how the representations work.
+In fact, the code must not make assumptions about the representations.
+This means in particular that it must use the proper macros for
+retrieving the character at a particular memory location, determining
+how many characters are present in a particular stretch of text, and
+incrementing a pointer to a particular character to point to the
+following character, and so on.  It must not assume that one character
+is stored using one byte, or even using any particular number of bytes.
+It must not assume that the number of characters in a stretch of text
+bears any particular relation to a number of bytes in that stretch.  It
+must not assume that the character at a particular memory location can
+be retrieved simply by dereferencing the memory location, even if a
+character is known to be ASCII or is being compared with an ASCII
+character, etc.  Careful coding is required to be Mule clean.  The
+biggest work of adding Mule support, in fact, is converting all of the
+existing code to be Mule clean.
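+
+For example, a Mule-clean loop over a stretch of internal-format text
+looks roughly like the following (a sketch; the macro names follow the
+style of @file{text.h}, but check that file for the exact current API):
+
+@example
+/* Count the occurrences of the character CH in the LEN bytes of
+   internal-format text starting at PTR.  Note that we never assume
+   one character == one byte, and we never simply dereference the
+   pointer to get a character. */
+static Charcount
+count_char_in_text (Ibyte *ptr, Bytecount len, Ichar ch)
+@{
+  Ibyte *end = ptr + len;
+  Charcount n = 0;
+
+  while (ptr < end)
+    @{
+      if (itext_ichar (ptr) == ch) /* fetch the whole character */
+        n++;
+      INC_IBYTEPTR (ptr);          /* advance by one character,
+                                      not one byte */
+    @}
+  return n;
+@}
+@end example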
+
+Lisp code is mostly unaffected by these concerns.  Text in strings and
+buffers appears simply as a sequence of characters regardless of
+whether Mule support is present.  The biggest difference with older
+versions of Emacs, as well as current versions of GNU Emacs, is that
+integers and characters are no longer equivalent, but are separate
+Lisp Object types.
+
+@subheading Conversion Between Internal and External Representations
+
+All text needs to be converted to an external representation before being
+sent to a function or file, and all text retrieved from a function or
+file needs to be converted to the internal representation.  This
+conversion needs to happen as close to the source or destination of the
+text as possible.  No operations should ever be performed on text encoded
+in an external representation other than simple copying, because no
+assumptions can reliably be made about the format of this text.  You
+cannot assume, for example, that the end of text is terminated by a null
+byte. (For example, if the text is Unicode, it will have many null bytes
+in it.)  You cannot find the next "slash" character by searching through
+the bytes until you find a byte that looks like a "slash" character,
+because it might actually be the second byte of a Kanji character.
+Furthermore, all text in the internal representation must be converted,
+even if it is known to be completely ASCII, because the external
+representation may not be ASCII compatible (for example, if it is
+Unicode).
+
+The place where C code needs to be the most careful is when calling
+external API functions.  It is easy to forget that all text passed to or
+retrieved from these functions needs to be converted.  This includes text
+in structures passed to or retrieved from these functions and all text
+that is passed to a callback function that is called by the system.
+
+Macros are provided to perform conversions to or from external text.
+These macros are called @code{TO_EXTERNAL_FORMAT} and
+@code{TO_INTERNAL_FORMAT}, respectively.  These macros accept input in
+various forms (for example, Lisp strings, buffers, lstreams, and raw
+data) and can return data in multiple formats, including both
+@code{malloc()}ed and @code{alloca()}ed data.  The use of
+@code{alloca()}ed data here is particularly important because, in
+general, the returned data will not be used after making the API call,
+and as a result, using @code{alloca()}ed data provides a very cheap and
+easy-to-use method of allocation.
+
+These macros take a coding system argument which indicates the nature of
+the external encoding.  A coding system is an object that encapsulates
+the structures of a particular external encoding and the methods required
+to convert to and from this encoding.  A facility exists to create coding
+system aliases, which in essence gives a single coding system two
+different names.  It is effectively used in XEmacs to provide a layer of
+abstraction on top of the actual coding systems.  For example, the coding
+system alias @code{file-name} points to whichever coding system is
+currently used for encoding and decoding file names as passed to or
+retrieved from system calls.  In general, the actual encoding will
+differ from system to system, and will also depend on the particular
+locale that the user is in.  The use of the @code{file-name} alias
+hides that implementation detail behind an abstract interface layer
+which provides a unified set of coding systems consistent across all
+operating environments.
+
+The choice of which coding system to use in a particular conversion macro
+requires some thought.  In general, you should choose a lower-level
+actual coding system when the very design of the APIs you are working
+with calls for that particular coding system.  In all other cases, you
+should find the least general abstract coding system (i.e. coding system
+alias) that applies to your specific situation.  Only use the most
+general coding systems, such as @code{native}, when there is simply
+nothing else that is more appropriate.  By doing things this way, you
+allow the user more control over how the encoding actually works,
+because the user is free to map the abstracted coding system names onto
+different actual coding systems.
+coding systems.
+
+Some common coding systems are:
+
+@table @code
+@item ctext
+Compound Text, the standard encoding under X Windows, used for
+clipboard data and possibly other data.  (@code{ctext} is a coding
+system of type ISO2022.)
+
+@item mswindows-unicode
+this is used for representing text passed to MS Windows API calls with
+arguments that need to be in Unicode format.  (@code{mswindows-unicode}
+is a coding system of type UTF-16.)
+
+@item mswindows-multibyte
+this is used for representing text passed to MS Windows API calls with
+arguments that need to be in multi-byte format.  Note that there are
+very few if any examples of such calls.
+
+@item mswindows-tstr
+this is used for representing text passed to any MS Windows API calls
+that declare their argument as @code{LPTSTR} or @code{LPCTSTR}.  This
+covers the vast majority of system calls; the alias automatically
+translates to either @code{mswindows-unicode} or
+@code{mswindows-multibyte}, depending on the presence or
+absence of the @code{UNICODE} preprocessor constant.  (If we compile XEmacs
+with this preprocessor constant, then all API calls use Unicode for all
+text passed to or received from these API calls.)
+
+@item terminal
+used for text sent to or read from a text terminal in the absence of a
+more specific coding system (calls to window-system specific APIs should
+use the appropriate window-specific coding system if it makes sense to
+do so.)
+
+@item file-name
+used when specifying the names of files in the absence of a more
+specific encoding, such as @code{mswindows-tstr}.
+
+@item native
+the most general coding system for specifying text passed to system
+calls.  This generally translates to whatever coding system is specified
+by the current locale.  This should only be used when none of the coding
+systems mentioned above are appropriate.
+@end table
+
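+Putting this together, a typical conversion of a file name passed to a
+system call might look something like this (a sketch following the
+conventions documented in @file{text.h}; see that file for the full set
+of source and sink types):
+
+@example
+/* Delete the file named by the Lisp string FILENAME.  The external
+   text is alloca()ed, so nothing needs to be freed explicitly. */
+static int
+sketch_delete_file (Lisp_Object filename)
+@{
+  Extbyte *extname;
+
+  TO_EXTERNAL_FORMAT (LISP_STRING, filename,
+                      C_STRING_ALLOCA, extname,
+                      Qfile_name);
+  return unlink (extname);
+@}
+@end example
+
+Note that what gets passed to the macro is the @code{file-name} alias
+(via its symbol), not a hard-coded actual coding system.
+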
+@subheading Proper Display of Multilingual Text
+
+There are two things required to get this working correctly.  One is
+selecting the correct font, and the other is encoding the text according
+to the encoding used for that specific font, or the window-system
+specific text display API.  Generally, each separate character set has
+a different font associated with it, which is specified by name, and
+each font has an associated encoding into which the characters must be
+translated.  (This is the case on X Windows, at least; on MS Windows
+there is a more general mechanism.)  Both the specific font for a
+charset and the encoding of that font are system dependent.  Currently
+there is a way of specifying these two properties under X Windows
+(using the @code{registry} and @code{ccl} properties of a character
+set) but not for other window systems.  A more general system needs to
+be implemented to allow these characteristics to be specified for all
+window systems.
+
+Another issue is making sure that the necessary fonts for displaying
+various character sets are installed on the system.  Currently, XEmacs
+provides, on its web site, X Windows fonts for a number of different
+character sets that can be installed by users.  This isn't done yet for
+Windows, but it should be.
+
+@subheading Inputting of Multilingual Text
+
+This is a rather complicated issue because there are many paradigms
+defined for inputting multi-lingual text, some of which are specific to
+particular languages, and any particular language may have many
+different paradigms defined for inputting its text.  These paradigms are
+encoded in input methods and there is a standard API for defining an
+input method in XEmacs called LEIM, or Library of Emacs Input Methods.
+Some of these input methods are written entirely in Elisp, and thus are
+system-independent, while others require the aid either of an external
+process, or of C level support that ties into a particular
+system-specific input method API, for example, XIM under X Windows, or
+the active keyboard layout and IME support under Windows.  Currently,
+there is no support for any system-specific input methods under
+Microsoft Windows, although this will change.
+
+@node Introduction to Multilingual Issues #4, Character Sets, Introduction to Multilingual Issues #3, Multilingual Support
+@section Introduction to Multilingual Issues #4
+@cindex introduction to multilingual issues #4
+
+The rest of the sections in this chapter consist of yet another
+introduction to multilingual issues, duplicating the information in the
+previous sections.
+
+@node Character Sets, Encodings, Introduction to Multilingual Issues #4, Multilingual Support
 @section Character Sets
 @cindex character sets
 
-  A character set (or @dfn{charset}) is an ordered set of characters.  A
-particular character in a charset is indexed using one or more
-@dfn{position codes}, which are non-negative integers.  The number of
-position codes needed to identify a particular character in a charset is
-called the @dfn{dimension} of the charset.  In XEmacs/Mule, all charsets
-have dimension 1 or 2, and the size of all charsets (except for a few
-special cases) is either 94, 96, 94 by 94, or 96 by 96.  The range of
-position codes used to index characters from any of these types of
+  A @dfn{character set} (or @dfn{charset}) is an ordered set of
+characters.  A particular character in a charset is indexed using one or
+more @dfn{position codes}, which are non-negative integers.  The number
+of position codes needed to identify a particular character in a charset
+is called the @dfn{dimension} of the charset.  In XEmacs/Mule, all
+charsets have dimension 1 or 2, and the size of all charsets (except for
+a few special cases) is either 94, 96, 94 by 94, or 96 by 96.  The range
+of position codes used to index characters from any of these types of
 character sets is as follows:
 
 @example
@@ -9190,7 +10573,7 @@
 
   This is a bit ad-hoc but gets the job done.
 
-@node Encodings
+@node Encodings, Internal Mule Encodings, Character Sets, Multilingual Support
 @section Encodings
 @cindex encodings, Mule
 @cindex Mule encodings
@@ -9215,22 +10598,23 @@
 encodings:
 
 @menu
-* Japanese EUC (Extended Unix Code)::
-* JIS7::
+* Japanese EUC (Extended Unix Code)::  
+* JIS7::                        
 @end menu
 
-@node Japanese EUC (Extended Unix Code)
+@node Japanese EUC (Extended Unix Code), JIS7, Encodings, Encodings
 @subsection Japanese EUC (Extended Unix Code)
 @cindex Japanese EUC (Extended Unix Code)
 @cindex EUC (Extended Unix Code), Japanese
 @cindex Extended Unix Code, Japanese EUC
 
-This encompasses the character sets Printing-ASCII, Japanese-JISX0201,
-and Japanese-JISX0208-Kana (half-width katakana, the right half of
-JISX0201).  It uses 8-bit bytes.
-
-Note that Printing-ASCII and Japanese-JISX0201-Kana are 94-character
-charsets, while Japanese-JISX0208 is a 94x94-character charset.
+This encompasses the character sets Printing-ASCII, Katakana-JISX0201
+(half-width katakana, the right half of JISX0201), Japanese-JISX0208,
+and Japanese-JISX0212.
+
+Note that Printing-ASCII and Katakana-JISX0201 are 94-character
+charsets, while Japanese-JISX0208 and Japanese-JISX0212 are
+94x94-character charsets.
 
 The encoding is as follows:
 
@@ -9238,26 +10622,36 @@
 Character set            Representation (PC=position-code)
 -------------            --------------
 Printing-ASCII           PC1
-Japanese-JISX0201-Kana   0x8E       | PC1 + 0x80
+Katakana-JISX0201        0x8E       | PC1 + 0x80
 Japanese-JISX0208        PC1 + 0x80 | PC2 + 0x80
+Japanese-JISX0212        0x8F | PC1 + 0x80 | PC2 + 0x80
 @end example
 
-
-@node JIS7
+Note that there are other versions of EUC for other Asian languages.
+EUC in general is characterized by
+
+@enumerate
+@item
+row-column encoding,
+@item
+big-endian (row-first) ordering, and
+@item
+ASCII compatibility in variable width forms.
+@end enumerate
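+
+As a concrete illustration, here is a rough sketch (not XEmacs code) of
+classifying the first byte of an EUC-JP sequence according to the
+encoding table above:
+
+@example
+/* Sketch only; assumes at least one byte is readable at p. */
+enum euc_charset
+@{
+  EUC_ASCII, EUC_KATAKANA_0201, EUC_JISX0208, EUC_JISX0212, EUC_INVALID
+@};
+
+static enum euc_charset
+euc_jp_classify (const unsigned char *p)
+@{
+  if (p[0] < 0x80)
+    return EUC_ASCII;            /* Printing-ASCII: PC1 */
+  if (p[0] == 0x8E)
+    return EUC_KATAKANA_0201;    /* SS2, then PC1 + 0x80 */
+  if (p[0] == 0x8F)
+    return EUC_JISX0212;         /* SS3, then PC1 + 0x80, PC2 + 0x80 */
+  if (p[0] >= 0xA1 && p[0] <= 0xFE)
+    return EUC_JISX0208;         /* PC1 + 0x80, PC2 + 0x80 */
+  return EUC_INVALID;
+@}
+@end example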
+
+@node JIS7,  , Japanese EUC (Extended Unix Code), Encodings
 @subsection JIS7
 @cindex JIS7
 
 This encompasses the character sets Printing-ASCII,
-Japanese-JISX0201-Roman (the left half of JISX0201; this character set
+Latin-JISX0201 (the left half of JISX0201; this character set
 is very similar to Printing-ASCII and is a 94-character charset),
-Japanese-JISX0208, and Japanese-JISX0201-Kana.  It uses 7-bit bytes.
-
-Unlike Japanese EUC, this is a @dfn{modal} encoding, which
-means that there are multiple states that the encoding can
-be in, which affect how the bytes are to be interpreted.
-Special sequences of bytes (called @dfn{escape sequences})
-are used to change states.
+Japanese-JISX0208, and Katakana-JISX0201.  It uses 7-bit bytes.
+
+Unlike EUC, this is a @dfn{modal} encoding, which means that there are
+multiple states that the encoding can be in, which affect how the bytes
+are to be interpreted.  Special sequences of bytes (called @dfn{escape
+sequences}) are used to change states.
 
   The encoding is as follows:
 
@@ -9265,22 +10659,22 @@
 Character set              Representation (PC=position-code)
 -------------              --------------
 Printing-ASCII             PC1
-Japanese-JISX0201-Roman    PC1
-Japanese-JISX0201-Kana     PC1
-Japanese-JISX0208          PC1 PC2
+Latin-JISX0201             PC1
+Katakana-JISX0201          PC1
+Japanese-JISX0208          PC1 | PC2
 
 
 Escape sequence   ASCII equivalent   Meaning
 ---------------   ----------------   -------
-0x1B 0x28 0x4A    ESC ( J            invoke Japanese-JISX0201-Roman
-0x1B 0x28 0x49    ESC ( I            invoke Japanese-JISX0201-Kana
+0x1B 0x28 0x4A    ESC ( J            invoke Latin-JISX0201
+0x1B 0x28 0x49    ESC ( I            invoke Katakana-JISX0201
 0x1B 0x24 0x42    ESC $ B            invoke Japanese-JISX0208
 0x1B 0x28 0x42    ESC ( B            invoke Printing-ASCII
 @end example
 
   Initially, Printing-ASCII is invoked.
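+
+The following is a toy sketch (not XEmacs code) of how a decoder might
+track the JIS7 state machine using the escape sequences listed above:
+
+@example
+enum jis7_state
+@{
+  JIS_ASCII, JIS_LATIN_0201, JIS_KATAKANA_0201, JIS_X0208
+@};
+
+/* Returns the number of bytes consumed (3 for a recognized escape
+   sequence, otherwise 0) and updates *state.  Assumes at least three
+   bytes are readable at p. */
+static int
+jis7_handle_escape (const unsigned char *p, enum jis7_state *state)
+@{
+  if (p[0] != 0x1B)
+    return 0;
+  if (p[1] == 0x28 && p[2] == 0x42) @{ *state = JIS_ASCII;         return 3; @}
+  if (p[1] == 0x28 && p[2] == 0x4A) @{ *state = JIS_LATIN_0201;    return 3; @}
+  if (p[1] == 0x28 && p[2] == 0x49) @{ *state = JIS_KATAKANA_0201; return 3; @}
+  if (p[1] == 0x24 && p[2] == 0x42) @{ *state = JIS_X0208;         return 3; @}
+  return 0;
+@}
+@end example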
 
-@node Internal Mule Encodings
+@node Internal Mule Encodings, Byte/Character Types; Buffer Positions; Other Typedefs, Encodings, Multilingual Support
 @section Internal Mule Encodings
 @cindex internal Mule encodings
 @cindex Mule encodings, internal
@@ -9299,18 +10693,19 @@
   More specifically:
 
 @example
-Character set           Leading byte
--------------           ------------
-ASCII                   0
-Composite               0x80
-Dimension-1 Official    0x81 - 0x8D
-                          (0x8E is free)
-Control-1               0x8F
-Dimension-2 Official    0x90 - 0x99
-                          (0x9A - 0x9D are free;
-                           0x9E and 0x9F are reserved)
-Dimension-1 Private     0xA0 - 0xEF
-Dimension-2 Private     0xF0 - 0xFF
+Character set                Leading byte
+-------------                ------------
+ASCII                        0 (0x7F in arrays indexed by leading byte)
+Composite                    0x8D
+Dimension-1 Official         0x80 - 0x8C/0x8D
+                               (0x8E is free)
+Control                      0x8F
+Dimension-2 Official         0x90 - 0x99
+                               (0x9A - 0x9D are free)
+Dimension-1 Private Marker   0x9E
+Dimension-2 Private Marker   0x9F
+Dimension-1 Private          0xA0 - 0xEF
+Dimension-2 Private          0xF0 - 0xFF
 @end example
 
 There are two internal encodings for characters in XEmacs/Mule.  One is
@@ -9325,11 +10720,11 @@
 followed later by the exact details.)
 
 @menu
-* Internal String Encoding::
-* Internal Character Encoding::
+* Internal String Encoding::    
+* Internal Character Encoding::  
 @end menu
 
-@node Internal String Encoding
+@node Internal String Encoding, Internal Character Encoding, Internal Mule Encodings, Internal Mule Encodings
 @subsection Internal String Encoding
 @cindex internal string encoding
 @cindex string encoding, internal
@@ -9382,7 +10777,7 @@
 Shift-JIS and Big5 (not yet described) satisfy only (2). (All
 non-modal encodings must satisfy (2), in order to be unambiguous.)
 
-@node Internal Character Encoding
+@node Internal Character Encoding,  , Internal String Encoding, Internal Mule Encodings
 @subsection Internal Character Encoding
 @cindex internal character encoding
 @cindex character encoding, internal
@@ -9406,7 +10801,7 @@
    range:                                                   (00 - 7F)
 Control-1                  0               1              PC1
    range:                                                   (00 - 1F)
-Dimension-1 official       0            LB - 0x80         PC1
+Dimension-1 official       0            LB - 0x7F         PC1
    range:                                    (01 - 0D)      (20 - 7F)
 Dimension-1 private        0            LB - 0x80         PC1
    range:                                    (20 - 6F)      (20 - 7F)
@@ -9417,55 +10812,1737 @@
 Composite                 0x1F             ?               ?
 @end example
 
-  Note that character codes 0 - 255 are the same as the ``binary encoding''
-described above.
-
-@node CCL
+Note that character codes 0 - 255 are the same as the ``binary
+encoding'' described above.
+
+Most of the code in XEmacs knows nothing of the representation of a
+character other than that values 0 - 255 represent ASCII, Control 1,
+and Latin 1.
+
+@strong{WARNING WARNING WARNING}: The Boyer-Moore code in
+@file{search.c}, and the code in @code{search_buffer()} that determines
+whether that code can be used, knows that ``field 3'' in a character
+always corresponds to the last byte in the textual representation of the
+character. (This is important because the Boyer-Moore algorithm works by
+looking at the last byte of the search string and &&#### finish this.
+
+@node Byte/Character Types; Buffer Positions; Other Typedefs, Internal Text API's, Internal Mule Encodings, Multilingual Support
+@section Byte/Character Types; Buffer Positions; Other Typedefs
+@cindex byte/character types; buffer positions; other typedefs
+@cindex byte/character types
+@cindex character types
+@cindex buffer positions
+@cindex typedefs, other
+
+@menu
+* Byte Types::                  
+* Different Ways of Seeing Internal Text::  
+* Buffer Positions::            
+* Other Typedefs::              
+* Usage of the Various Representations::  
+* Working With the Various Representations::  
+@end menu
+
+@node Byte Types, Different Ways of Seeing Internal Text, Byte/Character Types; Buffer Positions; Other Typedefs, Byte/Character Types; Buffer Positions; Other Typedefs
+@subsection Byte Types
+@cindex byte types
+
+Stuff pointed to by a char * or unsigned char * will nearly always be
+one of the following types:
+
+@itemize @minus
+@item
+a) [Ibyte] pointer to internally-formatted text
+@item
+b) [Extbyte] pointer to text in some external format, which can be
+             defined as all formats other than the internal one
+@item
+c) [Ascbyte] pure ASCII text
+@item
+d) [Binbyte] binary data that is not meant to be interpreted as text
+@item
+e) [Rawbyte] general data in memory, where we don't care about whether
+             it's text or binary
+@item
+f) [Boolbyte] a zero or a one
+@item
+g) [Bitbyte] a byte used for bit fields
+@item
+h) [Chbyte] null-semantics @code{char *}; used when casting an argument to
+            an external API where the the other types may not be
+            appropriate
+@end itemize
+
+Types (b), (c), (f) and (h) are defined as @code{char}, while the others are
+@code{unsigned char}.  This is for maximum safety (signed characters are
+dangerous to work with) while maintaining as much compatibility with
+external API's and string constants as possible.
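+
+Purely as a sketch of what the description above amounts to, the
+typedefs might look like this (the authoritative definitions are in
+@file{text.h} and may include additional machinery):
+
+@example
+/* Sketch only -- see text.h for the real definitions. */
+typedef unsigned char Ibyte;    /* (a) internally-formatted text */
+typedef char          Extbyte;  /* (b) externally-formatted text */
+typedef char          Ascbyte;  /* (c) pure ASCII text */
+typedef unsigned char Binbyte;  /* (d) binary data, not text */
+typedef unsigned char Rawbyte;  /* (e) general memory */
+typedef char          Boolbyte; /* (f) a zero or a one */
+typedef unsigned char Bitbyte;  /* (g) a byte used for bit fields */
+typedef char          Chbyte;   /* (h) null-semantics char */
+@end example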
+
+We also provide versions of the above types defined with different
+underlying C types, for API compatibility.  These use the following
+prefixes:
+
+@example
+C = plain char, when the base type is unsigned
+U = unsigned
+S = signed
+@end example
+
+(Formerly I had a comment saying that type (e) "should be replaced with
+void *".  However, there are in fact many places where an unsigned char
+* might be used -- e.g. for ease in pointer computation, since void *
+doesn't allow this, and for compatibility with external API's.)
+
+Note that these typedefs are purely for documentation purposes; from
+the C code's perspective, they are exactly equivalent to @code{char *},
+@code{unsigned char *}, etc., so you can freely use them with library
+functions declared as such.
+
+Using these more specific types rather than the general ones helps avoid
+the confusion that occurs when the semantics of a @code{char *} or
+@code{unsigned char *} argument are unclear.  Furthermore, by requiring
+that @strong{all} uses of @code{char} be replaced with some other type as
+part of the Mule-ization process, we can use a search for @code{char} as a
+way of finding code that has not been properly Mule-ized yet.
+
+@node Different Ways of Seeing Internal Text, Buffer Positions, Byte Types, Byte/Character Types; Buffer Positions; Other Typedefs
+@subsection Different Ways of Seeing Internal Text
+@cindex different ways of seeing internal text
+
+There are two primary ways of representing internal text.  One is as an
+"array" of individual characters; the other is as a "stream" of bytes.
+In the ASCII world, where there are at most 256 characters, things are
+easy because each character fits into a byte.  In general, however, this
+is not true -- see the above discussion of characters vs. encodings.
+
+In some cases, it's also important to distinguish between a stream
+representation as a series of bytes and as a series of textual units.
+This is particularly important wrt Unicode.  The UTF-16 representation
+(sometimes referred to, rather sloppily, as simply the "Unicode" format)
+represents text as a series of 16-bit units.  Mostly, each unit
+corresponds to a single character, but not necessarily, as characters
+outside of the range 0-65535 (the BMP or "Basic Multilingual Plane" of
+Unicode) require two 16-bit units, through the mechanism of
+"surrogates".  When a series of 16-bit units is serialized into a byte
+stream, there are at least two possible representations, little-endian
+and big-endian, and which one is used may depend on the native format of
+16-bit integers in the CPU of the machine that XEmacs is running
+on. (Similarly, UTF-32 is logically a representation with 32-bit textual
+units.)
+
+Specifically:
+
+@itemize @minus
+@item
+UTF-8 has 1-byte (8-bit) units.
+@item
+UTF-16 has 2-byte (16-bit) units.
+@item
+UTF-32 has 4-byte (32-bit) units.
+@item
+XEmacs-internal encoding (the old "Mule" encoding) has 1-byte (8-bit)
+units.
+@item
+UTF-7 technically has 7-bit units that are within the "mail-safe" range
+(ASCII 32 - 126 plus a few control characters), but normally is encoded
+in an 8-bit stream. (UTF-7 is also a modal encoding, since it has a
+normal mode where printable ASCII characters represent themselves and a
+shifted mode, introduced with a plus sign, where a base-64 encoding is
+used.)
+@item
+UTF-5 technically has 7-bit units (normally encoded in an 8-bit stream,
+like UTF-7), but only uses uppercase A-V and 0-9, and only encodes 4
+bits worth of data per character.  UTF-5 is meant for encoding Unicode
+inside of DNS names.
+@end itemize
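+
+To make the surrogate mechanism mentioned above concrete, here is a
+minimal sketch of encoding a single character as UTF-16 textual units.
+This is standard UTF-16, not XEmacs-specific code:
+
+@example
+#include <stdint.h>
+
+/* Encode CH (a code point up to 0x10FFFF) into OUT; return the number
+   of 16-bit units written (1 for the BMP, 2 for a surrogate pair). */
+static int
+utf16_encode (uint32_t ch, uint16_t out[2])
+@{
+  if (ch < 0x10000)
+    @{
+      out[0] = (uint16_t) ch;                    /* BMP: one unit */
+      return 1;
+    @}
+  ch -= 0x10000;                                 /* 20 bits remain */
+  out[0] = (uint16_t) (0xD800 | (ch >> 10));     /* high surrogate */
+  out[1] = (uint16_t) (0xDC00 | (ch & 0x3FF));   /* low surrogate */
+  return 2;
+@}
+@end example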
+
+Thus, we can imagine three levels in the representation of textual data:
+
+@example
+series of characters -> series of textual units -> series of bytes
+       [Ichar]                 [Itext]                 [Ibyte]
+@end example
+
+XEmacs has three corresponding typedefs:
+
+@itemize @minus
+@item
+An Ichar is an integer (at least 32-bit), representing a 31-bit
+character.
+@item
+An Itext is an unsigned value, either 8, 16 or 32 bits, depending
+on the nature of the internal representation, and corresponding to
+a single textual unit.
+@item
+An Ibyte is an @code{unsigned char}, representing a single byte in a
+textual byte stream.
+@end itemize
+
+Internal text in stream format can be simultaneously viewed as either
+@code{Itext *} or @code{Ibyte *}.  The @code{Ibyte *} representation is convenient for
+copying data from one place to another, because such routines usually
+expect byte counts.  However, @code{Itext *} is much better for actually
+working with the data.
+
+From a text-unit perspective, units 0 through 127 will always be ASCII
+compatible, and data in Lisp strings (and other textual data generated
+as a whole, e.g. from external conversion) will be followed by a
+null-unit terminator.  From an @code{Ibyte *} perspective, however, the
+encoding is only ASCII-compatible if it uses 1-byte units.
+
+Similarly to the different text representations, three integral count
+types exist -- Charcount, Textcount and Bytecount.
+
+NOTE: Despite the presence of the terminator, internal text itself can
+have nulls in it! (Null text units, not just the null bytes present in
+any UTF-16 encoding.) The terminator is present because in many cases
+internal text is passed to routines that will ultimately pass the text
+to library functions that cannot handle embedded nulls, e.g. functions
+manipulating filenames, and it is a real hassle to have to pass the
+length around constantly.  But this can lead to sloppy coding!  We need
+to be careful about watching for nulls in places that are important,
+e.g. manipulating string objects or passing data to/from the clipboard.
+
+@table @code
+@item Ibyte
+The data in a buffer or string is logically made up of Ibyte objects,
+where an Ibyte takes up the same amount of space as a char. (It is
+declared differently, though, to catch invalid usages.) Strings stored
+using Ibytes are said to be in "internal format".  The important
+characteristics of internal format are
+
+@itemize @minus
+@item
+ASCII characters are represented as a single Ibyte, in the range 0 -
+0x7f.
+@item
+All other characters are represented as an Ibyte in the range 0x80 - 0x9f
+followed by one or more Ibytes in the range 0xa0 to 0xff.
+@end itemize
+
+This leads to a number of desirable properties (illustrated in the
+sketch following this table):
+
+@itemize @minus
+@item
+Given the position of the beginning of a character, you can find the
+beginning of the next or previous character in constant time.
+@item
+When searching for a substring or an ASCII character within the string,
+you need merely use standard searching routines.
+@end itemize
+
+@item Itext
+
+#### Document me.
+
+@item Ichar
+This typedef represents a single Emacs character, which can be ASCII,
+ISO-8859, or some extended character, as would typically be used for
+Kanji.  Note that the representation of a character as an Ichar is @strong{not}
+the same as the representation of that same character in a string; thus,
+you cannot do the standard C trick of passing a pointer to a character
+to a function that expects a string.
+
+An Ichar takes up 19 bits of representation and (for code compatibility
+and such) is compatible with an int.  This representation is visible on
+the Lisp level.  The important characteristics of the Ichar
+representation are
+
+@itemize @minus
+@item
+values 0x00 - 0x7f represent ASCII.
+@item
+values 0x80 - 0xff represent the right half of ISO-8859-1.
+@item
+values 0x100 and up represent all other characters.
+@end itemize
+
+This means that Ichar values are upwardly compatible with the standard
+8-bit representation of ASCII/ISO-8859-1.
+
+@item Extbyte
+Strings that go in or out of Emacs are in "external format", typedef'ed
+as an array of char or a char *.  There is more than one external format
+(JIS, EUC, etc.), with differing properties.  Many of them are modal
+encodings, which is to say that the meaning of particular bytes is not
+fixed but depends on what "mode" the string is currently in (e.g. bytes
+in the range 0 - 0x7f might be interpreted as ASCII, or as Hiragana, or
+as 2-byte Kanji, depending on the current mode).  The mode starts out in
+ASCII/ISO-8859-1 and is switched using escape sequences -- for example,
+in the JIS encoding, 'ESC $ B' switches to a mode where pairs of bytes
+in the range 0 - 0x7f are interpreted as Kanji characters.
+
+External-formatted data is generally desirable for passing data between
+programs because it is upwardly compatible with standard
+ASCII/ISO-8859-1 strings and may require less space than internal
+encodings such as the one described above.  In addition, some encodings
+(e.g. JIS) keep all characters (except the ESC used to switch modes) in
+the printing ASCII range 0x20 - 0x7e, which results in a much higher
+probability that the data will avoid being garbled in transmission.
+Externally-formatted data is generally not very convenient to work with,
+however, and for this reason is usually converted to internal format
+before any work is done on the string.
+
+NOTE: filenames need to be in external format so that ISO-8859-1
+characters come out correctly.
+@end table
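+
+Here is the sketch referred to above under @code{Ibyte}.  Given only the
+stated properties of the internal format (ASCII is a single byte 0x00 -
+0x7f; all other characters are a leading byte 0x80 - 0x9f followed by
+one or more bytes 0xa0 - 0xff), scanning forward or backward by one
+character takes constant time.  These are hypothetical helper functions,
+not the actual XEmacs macros (see @file{text.h} for those):
+
+@example
+/* Sketch only -- not the real macros. */
+static const Ibyte *
+itext_skip_forward (const Ibyte *p)
+@{
+  p++;                  /* skip the ASCII byte or the leading byte */
+  while (*p >= 0xA0)    /* skip any continuation bytes */
+    p++;
+  return p;
+@}
+
+static const Ibyte *
+itext_skip_backward (const Ibyte *p)
+@{
+  do
+    p--;
+  while (*p >= 0xA0);   /* back up over continuation bytes */
+  return p;             /* now at an ASCII byte or a leading byte */
+@}
+@end example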
+
+@node Buffer Positions, Other Typedefs, Different Ways of Seeing Internal Text, Byte/Character Types; Buffer Positions; Other Typedefs
+@subsection Buffer Positions
+@cindex buffer positions
+
+There are three possible ways to specify positions in a buffer.  All
+of these are one-based: the beginning of the buffer is position or
+index 1, and 0 is not a valid position.
+
+As a "buffer position" (typedef Charbpos):
+
+   This is an index specifying an offset in characters from the
+   beginning of the buffer.  Note that buffer positions are
+   logically @strong{between} characters, not on a character.  The
+   difference between two buffer positions specifies the number of
+   characters between those positions.  Buffer positions are the
+   only kind of position externally visible to the user.
+
+As a "byte index" (typedef Bytebpos):
+
+   This is an index over the bytes used to represent the characters
+   in the buffer.  If there is no Mule support, this is identical
+   to a buffer position, because each character is represented
+   using one byte.  However, with Mule support, many characters
+   require two or more bytes for their representation, and so a
+   byte index may be greater than the corresponding buffer
+   position.
+
+As a "memory index" (typedef Membpos):
+
+   This is the byte index adjusted for the gap.  For positions
+   before the gap, this is identical to the byte index.  For
+   positions after the gap, this is the byte index plus the gap
+   size.  There are two possible memory indices for the gap
+   position; the memory index at the beginning of the gap should
+   always be used, except in code that deals with manipulating the
+   gap, where both indices may be seen.  The address of the
+   character "at" (i.e. following) a particular position can be
+   obtained from the formula
+
+     buffer_start_address + memory_index(position) - 1
+
+   except in the case of characters at the gap position.
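+
+A minimal sketch of the byte-index/memory-index relationship just
+described (the names @code{gap_byte_pos} and @code{gap_size} are
+hypothetical bookkeeping variables, not the actual fields used in
+@file{buffer.h}):
+
+@example
+/* Sketch only.  Positions at or before the (beginning of the) gap map
+   directly; positions after the gap are shifted by the gap size. */
+static Membpos
+bytebpos_to_membpos (Bytebpos pos, Bytebpos gap_byte_pos,
+                     Bytecount gap_size)
+@{
+  return pos <= gap_byte_pos ? (Membpos) pos
+                             : (Membpos) (pos + gap_size);
+@}
+@end example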
+
+@node Other Typedefs, Usage of the Various Representations, Buffer Positions, Byte/Character Types; Buffer Positions; Other Typedefs
+@subsection Other Typedefs
+@cindex other typedefs
+
+@table @code
+@item Charcount
+This typedef represents a count of characters, such as a character
+offset into a string or the number of characters between two positions
+in a buffer.  The difference between two Charbpos's is a Charcount, and
+character positions in a string are represented using a Charcount.
+
+@item Textcount
+#### Document me.
+
+@item Bytecount
+Similar to a Charcount but represents a count of bytes.  The difference
+between two Bytebpos's is a Bytecount.
+@end table
+
+
+@node Usage of the Various Representations, Working With the Various Representations, Other Typedefs, Byte/Character Types; Buffer Positions; Other Typedefs
+@subsection Usage of the Various Representations
+@cindex usage of the various representations
+
+Memory indices are used in low-level functions in insdel.c and for
+extent endpoints and marker positions.  The reason for this is that
+this way, the extents and markers don't need to be updated for most
+insertions, which merely shrink the gap and don't move any
+characters around in memory.
+
+(The beginning-of-gap memory index simplifies insertions w.r.t.
+markers, because text usually gets inserted after markers.  For
+extents, it is merely for consistency, because text can get
+inserted either before or after an extent's endpoint depending on
+the open/closedness of the endpoint.)
+
+Byte indices are used in other code that needs to be fast,
+such as the searching, redisplay, and extent-manipulation code.
+
+Buffer positions are used in all other code.  This is because this
+representation is easiest to work with (especially since Lisp
+code always uses buffer positions), necessitates the fewest
+changes to existing code, and is the safest (e.g. if the text gets
+shifted underneath a buffer position, it will still point to a
+character; if text is shifted under a byte index, it might point
+to the middle of a character, which would be bad).
+
+Similarly, Charcounts are used in all code that deals with strings
+except for code that needs to be fast, which uses Bytecounts.
+
+Strings are always passed around internally using internal format.
+Conversions to and from external format are performed at the time
+that the data goes in or out of Emacs.
+
+@node Working With the Various Representations,  , Usage of the Various Representations, Byte/Character Types; Buffer Positions; Other Typedefs
+@subsection Working With the Various Representations
+@cindex working with the various representations
+
+We write things this way because it's very important that
+@code{MAX_BYTEBPOS_GAP_SIZE_3} be a multiple of 3. (As it happens,
+65535 is a multiple of 3, but this may not always be the
+case.) #### unfinished
+
+@node Internal Text API's, Coding for Mule, Byte/Character Types; Buffer Positions; Other Typedefs, Multilingual Support
+@section Internal Text API's
+@cindex internal text API's
+@cindex text API's, internal
+@cindex API's, text, internal
+
+@strong{NOTE}: The most current documentation for these API's is in
+@file{text.h}.  In case of error, assume that file is correct and this
+one wrong.
+
+@menu
+* Basic internal-format API's::  
+* The DFC API::                 
+* The Eistring API::            
+@end menu
+
+@node Basic internal-format API's, The DFC API, Internal Text API's, Internal Text API's
+@subsection Basic internal-format API's
+@cindex basic internal-format API's
+@cindex internal-format API's, basic
+@cindex API's, basic internal-format
+
+These are simple functions and macros to convert between text
+representation and characters, move forward and back in text, etc.
+
+#### Finish the rest of this.
+
+Use the following functions/macros on contiguous text in any of the
+internal formats.  Those that take a format arg work on all internal
+formats; the others work only on the default (variable-width under Mule)
+format.  If the text you're operating on is known to come from a buffer,
+use the buffer-level functions in buffer.h, which automatically know the
+correct format and handle the gap.
+
+Some terminology:
+
+"itext" appearing in the macros means "internal-format text" -- type
+@code{Ibyte *}.  Operations on such pointers themselves, rather than on the
+text being pointed to, have "itext" instead of "itext" in the macro
+name.  "ichar" in the macro names means an Ichar -- the representation
+of a character as a single integer rather than a series of bytes, as part
+of "itext".  Many of the macros below are for converting between the
+two representations of characters.
+
+Note also that we try to consistently distinguish between an "Ichar" and
+a Lisp character.  Stuff working with Lisp characters often just says
+"char", so we consistently use "Ichar" when that's what we're working
+with.
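+
+As a flavor of what such code looks like, here is a hypothetical
+fragment following the itext/ichar naming convention just described.
+The macro names are illustrative -- consult @file{text.h} for the
+actual names and signatures:
+
+@example
+/* Count the non-ASCII characters between P and END (sketch only). */
+static int
+count_non_ascii (const Ibyte *p, const Ibyte *end)
+@{
+  int count = 0;
+  while (p < end)
+    @{
+      Ichar ch = itext_ichar (p);   /* the character at p, as an Ichar */
+      if (ch >= 0x80)
+        count++;
+      INC_IBYTEPTR (p);             /* advance p by one character */
+    @}
+  return count;
+@}
+@end example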
+
+@node The DFC API, The Eistring API, Basic internal-format API's, Internal Text API's
+@subsection The DFC API
+@cindex DFC API
+@cindex API, DFC
+
+This is for conversion between internal and external text.  Note that
+there is also the "new DFC" API, which @strong{returns} a pointer to the
+converted text (in alloca space), rather than storing it into a
+variable.
+
+The macros below are used for converting data between different formats.
+Generally, the data is textual, and the formats are related to
+internationalization (e.g. converting between internal-format text and
+UTF-8) -- but the mechanism is general, and could be used for anything,
+e.g. decoding gzipped data.
+
+In general, conversion involves a source of data, a sink, the existing
+format of the source data, and the desired format of the sink.  The
+macros below, however, always require that either the source or sink is
+internal-format text.  Therefore, in practice the conversions below
+involve source, sink, an external format (specified by a coding system),
+and the direction of conversion (internal->external or vice-versa).
+
+Sources and sinks can be raw data (sized or unsized -- when unsized,
+input data is assumed to be null-terminated [double null-terminated for
+Unicode-format data], and on output the length is not stored anywhere),
+Lisp strings, Lisp buffers, lstreams, and opaque data objects.  When the
+output is raw data, the result can be allocated either with @code{alloca()} or
+@code{malloc()}. (There is currently no provision for writing into a fixed
+buffer.  If you want this, use @code{alloca()} output and then copy the data --
+but be careful with the size!  Unless you are very sure of the encoding
+being used, upper bounds for the size are not in general computable.)
+The obvious restrictions on source and sink types apply (e.g. Lisp
+strings are a source and sink only for internal data).
+
+All raw data outputted will contain an extra null byte (two bytes for
+Unicode -- currently, in fact, all output data, whether internal or
+external, is double-null-terminated, but you can't count on this; see
+below).  This means that enough space is allocated to contain the extra
+nulls; however, these nulls are not reflected in the returned output
+size.
+
+The most basic macros are TO_EXTERNAL_FORMAT and TO_INTERNAL_FORMAT.
+These can be used to convert between any kinds of sources or sinks.
+However, 99% of conversions involve raw data or Lisp strings as both
+source and sink, and usually data is output as @code{alloca()} rather than
+@code{malloc()}.  For this reason, convenience macros are defined for many types
+of conversions involving raw data and/or Lisp strings, especially when
+the output is an @code{alloca()}ed string. (When the destination is a
+Lisp_String, there are other functions that should be used instead --
+@code{build_ext_string()} and @code{make_ext_string()}, for example.) The convenience
+macros are of two types -- the older kind that store the result into a
+specified variable, and the newer kind that return the result.  The newer
+kind of macros don't exist when the output is sized data, because that
+would have two return values.  NOTE: All convenience macros are
+ultimately defined in terms of TO_EXTERNAL_FORMAT and TO_INTERNAL_FORMAT.
+Thus, any comments below about the workings of these macros also apply to
+all convenience macros.
+
+@example
+TO_EXTERNAL_FORMAT (source_type, source, sink_type, sink, codesys)
+TO_INTERNAL_FORMAT (source_type, source, sink_type, sink, codesys)
+@end example
+
+Typical use is
+
+@example
+   TO_EXTERNAL_FORMAT (LISP_STRING, str, C_STRING_MALLOC, ptr, Qfile_name);
+@end example
+
+which means that the contents of the lisp string @var{str} are written
+to a malloc'ed memory area which will be pointed to by @var{ptr}, after the
+function returns.  The conversion will be done using the @code{file-name}
+coding system (which will be controlled by the user indirectly by
+setting or binding the variable @code{file-name-coding-system}).
+
+Some sources and sinks require two C variables to specify.  We use
+some preprocessor magic to allow different source and sink types, and
+even different numbers of arguments to specify different types of
+sources and sinks.
+
+So we can have a call that looks like
+
+@example
+   TO_INTERNAL_FORMAT (DATA, (ptr, len),
+                       MALLOC, (ptr, len),
+                       coding_system);
+@end example
+
+The parenthesized argument pairs are required to make the
+preprocessor magic work.
+
+NOTE: GC is inhibited during the entire operation of these macros.  This
+is because frequently the data to be converted comes from strings but
+gets passed in as just DATA, and GC may move around the string data.  If
+we didn't inhibit GC, there'd have to be a lot of messy recoding,
+alloca-copying of strings and other annoying stuff.
+
+The source or sink can be specified in one of these ways:
+
+@example
+DATA,   (ptr, len),    // input data is a fixed buffer of size len
+ALLOCA, (ptr, len),    // output data is in a @code{ALLOCA()}ed buffer of size len
+MALLOC, (ptr, len),    // output data is in a @code{malloc()}ed buffer of size len
+C_STRING_ALLOCA, ptr,  // equivalent to ALLOCA (ptr, len_ignored) on output
+C_STRING_MALLOC, ptr,  // equivalent to MALLOC (ptr, len_ignored) on output
+C_STRING,     ptr,     // equivalent to DATA, (ptr, strlen/wcslen (ptr))
+                       // on input (the Unicode version is used when correct)
+LISP_STRING,  string,  // input or output is a Lisp_Object of type string
+LISP_BUFFER,  buffer,  // output is written to (point) in lisp buffer
+LISP_LSTREAM, lstream, // input or output is a Lisp_Object of type lstream
+LISP_OPAQUE,  object,  // input or output is a Lisp_Object of type opaque
+@end example
+
+When specifying the sink, use lvalues, since the macro will assign to them,
+except when the sink is an lstream or a lisp buffer.
+
+For the sink types @code{ALLOCA} and @code{C_STRING_ALLOCA}, the resulting text is
+stored in a stack-allocated buffer, which is automatically freed on
+returning from the function.  However, the sink types @code{MALLOC} and
+@code{C_STRING_MALLOC} return @code{xmalloc()}ed memory.  The caller is responsible
+for freeing this memory using @code{xfree()}.
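+
+For instance, a conversion whose result is stack-allocated might look
+like the following sketch (@var{str} is assumed to be a Lisp string
+already in scope):
+
+@example
+Extbyte *ext;
+
+TO_EXTERNAL_FORMAT (LISP_STRING, str, C_STRING_ALLOCA, ext, Qfile_name);
+/* ext now points to null-terminated external-format data in stack
+   space, valid until the enclosing function returns. */
+@end example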
+
+The macros accept the kinds of sources and sinks appropriate for
+internal and external data representation.  See the type_checking_assert
+macros below for the actual allowed types.
+
+Since some sources and sinks use one argument (a Lisp_Object) to
+specify them, while others take a (pointer, length) pair, we use
+some C preprocessor trickery to allow pair arguments to be specified
+by parenthesizing them, as in the examples above.
+
+Anything prefixed by @code{dfc_} (`data format conversion') is private;
+such symbols are only used to implement these macros.
+
+[[Using C_STRING* is appropriate for using with external APIs that
+take null-terminated strings.  For internal data, we should try to
+be '\0'-clean - i.e. allow arbitrary data to contain embedded '\0'.
+
+Sometime in the future we might allow output to C_STRING_ALLOCA or
+C_STRING_MALLOC _only_ with @code{TO_EXTERNAL_FORMAT()}, not
+@code{TO_INTERNAL_FORMAT()}.]]
+
+The above comments are not true.  Frequently (most of the time, in
+fact), external strings come as zero-terminated entities, where the
+zero-termination is the only way to find out the length.  Even in
+cases where you can get the length, most of the time the system will
+still use the null to signal the end of the string, and there will
+still be no way to either send in or receive a string with embedded
+nulls.  In such situations, it's pointless to track the length
+because null bytes can never be in the string.  We have a lot of
+operations that make it easy to operate on zero-terminated strings,
+and forcing the user to deal with the length everywhere would only
+make the code uglier and more complicated, for no gain. --ben
+
+There is no problem using the same lvalue for source and sink.
+
+Also, when pointers are required, the code (currently at least) is
+lax and allows any pointer types, either in the source or the sink.
+This makes it possible, e.g., to deal with internal format data held
+in char *'s or external format data held in WCHAR * (i.e. Unicode).
+
+Finally, whenever storage allocation is called for, extra space is
+allocated for a terminating zero, and such a zero is stored in the
+appropriate place, regardless of whether the source data was
+specified using a length or was specified as zero-terminated.  This
+allows you to freely pass the resulting data, no matter how
+obtained, to a routine that expects zero termination (modulo, of
+course, that any embedded zeros in the resulting text will cause
+truncation).  In fact, currently two embedded zeros are allocated
+and stored after the data result.  This is to allow for the
+possibility of storing a Unicode value on output, which needs the
+two zeros.  Currently, however, the two zeros are stored regardless
+of whether the conversion is internal or external and regardless of
+whether the external coding system is in fact Unicode.  This
+behavior may change in the future, and you cannot rely on this --
+the most you can rely on is that sink data in Unicode format will
+have two terminating nulls, which combine to form one Unicode null
+character.
+
+NOTE: You might ask, why are these not written as functions that
+@strong{RETURN} the converted string, since that would allow them to be used
+much more conveniently, without having to constantly declare temporary
+variables?  The answer is that in fact I originally did write the
+routines that way, but that required either
+
+@itemize @bullet
+@item
+(a) calling @code{alloca()} inside of a function call, or
+@item
+(b) using expressions separated by commas and a global temporary variable, or
+@item
+(c) using the GCC extension (@{ ... @}).
+@end itemize
+
+Turned out that all of the above had bugs, all caused by GCC (hence the
+comments about "those GCC wankers" and "ream gcc up the ass").  As for
+(a), some versions of GCC (especially on Intel platforms) had
+buggy implementations of @code{alloca()} that couldn't handle being called
+inside of a function call -- they just decremented the stack right in the
+middle of pushing args.  Oops, crash with stack trashing, very bad.  (b)
+was an attempt to fix (a), and that led to further GCC crashes, esp. when
+you had two such calls in a single subexpression, because GCC couldn't be
+counted upon to follow even a minimally reasonable order of execution.
+True, you can't count on one argument being evaluated before another, but
+GCC would actually interleave them so that the temp var got stomped on by
+one while the other was accessing it.  So I tried (c), which was
+problematic because that GCC extension has more bugs in it than a
+termite's nest.
+
+So reluctantly I converted to the current way.  Now, that was awhile ago
+(c. 1994), and it appears that the bug involving alloca in function calls
+has long since been fixed.  More recently, I defined the new-dfc routines
+down below, which DO allow exactly such convenience of returning your
+args rather than store them in temp variables, and I also wrote a
+configure check to see whether @code{alloca()} causes crashes inside of function
+calls, and if so use the portable @code{alloca()} implementation in alloca.c.
+If you define TEST_NEW_DFC, the old routines get written in terms of the
+new ones.  I've had a beta put out with this turned on, and it appears
+to cause no problems -- so we should consider switching, and feel no
+compunctions about writing further such function-like @code{alloca()}
+routines in lieu of statement-like ones. --ben
+
+@node The Eistring API,  , The DFC API, Internal Text API's
+@subsection The Eistring API
+@cindex Eistring API
+@cindex API, Eistring
+
+(This API is currently under-used.)  When doing simple things with
+internal text, the basic internal-format API's are enough.  But to do
+things like delete or replace a substring, concatenate various strings,
+etc. is difficult to do cleanly because of the allocation issues.
+The Eistring API is designed to deal with this, and provides a clean
+way of modifying and building up internal text. (Note that the former
+lack of this API has meant that some code uses Lisp strings to do
+similar manipulations, resulting in excess garbage and increased
+garbage collection.)
+
+NOTE: The Eistring API is (or should be) Mule-correct even without
+an ASCII-compatible internal representation.
+
+@example
+#### NOTE: This is a work in progress.  Neither the API nor especially
+the implementation is finished.
+
+NOTE: An Eistring is a structure that makes it easy to work with
+internally-formatted strings of data.  It provides operations similar
+in feel to the standard @code{strcpy()}, @code{strcat()}, @code{strlen()}, etc., but
+
+(a) it is Mule-correct
+(b) it does dynamic allocation so you never have to worry about size
+    restrictions
+(c) it comes in an @code{ALLOCA()} variety (all allocation is stack-local,
+    so there is no need to explicitly clean up) as well as a @code{malloc()}
+    variety
+(d) it knows its own length, so it does not suffer from standard null
+    byte brain-damage -- but it null-terminates the data anyway, so
+    it can be passed to standard routines
+(e) it provides a much more powerful set of operations and knows about
+    all the standard places where string data might reside: Lisp_Objects,
+    other Eistrings, Ibyte * data with or without an explicit length,
+    ASCII strings, Ichars, etc.
+(f) it provides easy operations to convert to/from externally-formatted
+    data, and is easier to use than the standard TO_INTERNAL_FORMAT
+    and TO_EXTERNAL_FORMAT macros. (An Eistring can store both the internal
+    and external version of its data, but the external version is only
+    initialized or changed when you call @code{eito_external()}.)
+
+The idea is to make it as easy to write Mule-correct string manipulation
+code as it is to write normal string manipulation code.  We also make
+the API sufficiently general that it can handle multiple internal data
+formats (e.g. some fixed-width optimizing formats and a default variable
+width format) and allows for @strong{ANY} data format we might choose in the
+future for the default format, including UCS2. (In other words, we can't
+assume that the internal format is ASCII-compatible and we can't assume
+it doesn't have embedded null bytes.  We do assume, however, that any
+chosen format will have the concept of null-termination.) All of this is
+hidden from the user.
+
+#### It is really too bad that we don't have a real object-oriented
+language, or at least a language with polymorphism!
+
+
+ ********************************************** 
+ *                 Declaration                * 
+ ********************************************** 
+
+To declare an Eistring, either put one of the following in the local
+variable section:
+
+DECLARE_EISTRING (name);
+     Declare a new Eistring and initialize it to the empty string.  This
+     is a standard local variable declaration and can go anywhere in the
+     variable declaration section.  NAME itself is declared as an
+     Eistring *, and its storage declared on the stack.
+
+DECLARE_EISTRING_MALLOC (name);
+     Declare and initialize a new Eistring, which uses @code{malloc()}ed
+     instead of @code{ALLOCA()}ed data.  This is a standard local variable
+     declaration and can go anywhere in the variable declaration
+     section.  Once you initialize the Eistring, you will have to free
+     it using @code{eifree()} to avoid memory leaks.  You will need to use this
+     form if you are passing an Eistring to any function that modifies
+     it (otherwise, the modified data may be in stack space and get
+     overwritten when the function returns).
+
+or use
+
+Eistring ei;
+void eiinit (Eistring *ei);
+void eiinit_malloc (Eistring *einame);
+     If you need to put an Eistring elsewhere than in a local variable
+     declaration (e.g. in a structure), declare it as shown and then
+     call one of the init macros.
+
+Also note:
+
+void eifree (Eistring *ei);
+     If you declared an Eistring to use @code{malloc()} to hold its data,
+     or converted it to the heap using @code{eito_malloc()}, then this
+     releases any data in it and afterwards resets the Eistring
+     using @code{eiinit_malloc()}.  Otherwise, it just resets the Eistring
+     using @code{eiinit()}.
+
+
+ ********************************************** 
+ *                 Conventions                * 
+ ********************************************** 
+
+ - The names of the functions have been chosen, where possible, to
+   match the names of @code{str*()} functions in the standard C API.
+ - 
+
+
+ ********************************************** 
+ *               Initialization               * 
+ ********************************************** 
+
+void eireset (Eistring *eistr);
+     Initialize the Eistring to the empty string.
+
+void eicpy_* (Eistring *eistr, ...);
+     Initialize the Eistring from somewhere:
+
+void eicpy_ei (Eistring *eistr, Eistring *eistr2);
+     ... from another Eistring.
+void eicpy_lstr (Eistring *eistr, Lisp_Object lisp_string);
+     ... from a Lisp_Object string.
+void eicpy_ch (Eistring *eistr, Ichar ch);
+     ... from an Ichar (this can be a conventional C character).
+
+void eicpy_lstr_off (Eistring *eistr, Lisp_Object lisp_string,
+                     Bytecount off, Charcount charoff,
+                     Bytecount len, Charcount charlen);
+     ... from a section of a Lisp_Object string.
+void eicpy_lbuf (Eistring *eistr, Lisp_Object lisp_buf,
+     	    Bytecount off, Charcount charoff,
+     	    Bytecount len, Charcount charlen);
+     ... from a section of a Lisp_Object buffer.
+void eicpy_raw (Eistring *eistr, const Ibyte *data, Bytecount len);
+     ... from raw internal-format data in the default internal format.
+void eicpy_rawz (Eistring *eistr, const Ibyte *data);
+     ... from raw internal-format data in the default internal format
+     that is "null-terminated" (the meaning of this depends on the nature
+     of the default internal format).
+void eicpy_raw_fmt (Eistring *eistr, const Ibyte *data, Bytecount len,
+                    Internal_Format intfmt, Lisp_Object object);
+     ... from raw internal-format data in the specified format.
+void eicpy_rawz_fmt (Eistring *eistr, const Ibyte *data,
+                     Internal_Format intfmt, Lisp_Object object);
+     ... from raw internal-format data in the specified format that is
+     "null-terminated" (the meaning of this depends on the nature of
+     the specific format).
+void eicpy_c (Eistring *eistr, const Ascbyte *c_string);
+     ... from an ASCII null-terminated string.  Non-ASCII characters in
+     the string are @strong{ILLEGAL} (read @code{abort()} with error-checking defined).
+void eicpy_c_len (Eistring *eistr, const Ascbyte *c_string, Bytecount len);
+     ... from an ASCII string, with length specified.  Non-ASCII characters
+     in the string are @strong{ILLEGAL} (read @code{abort()} with error-checking defined).
+void eicpy_ext (Eistring *eistr, const Extbyte *extdata,
+                Lisp_Object codesys);
+     ... from external null-terminated data, with coding system specified.
+void eicpy_ext_len (Eistring *eistr, const Extbyte *extdata,
+                    Bytecount extlen, Lisp_Object codesys);
+     ... from external data, with length and coding system specified.
+void eicpy_lstream (Eistring *eistr, Lisp_Object lstream);
+     ... from an lstream; reads data till eof.  Data must be in default
+     internal format; otherwise, interpose a decoding lstream.
+
+
+ ********************************************** 
+ *    Getting the data out of the Eistring    * 
+ ********************************************** 
+
+Ibyte *eidata (Eistring *eistr);
+     Return a pointer to the raw data in an Eistring.  This is NOT
+     a copy.
+
+Lisp_Object eimake_string (Eistring *eistr);
+     Make a Lisp string out of the Eistring.
+
+Lisp_Object eimake_string_off (Eistring *eistr,
+                               Bytecount off, Charcount charoff,
+     			  Bytecount len, Charcount charlen);
+     Make a Lisp string out of a section of the Eistring.
+
+void eicpyout_alloca (Eistring *eistr, LVALUE: Ibyte *ptr_out,
+                      LVALUE: Bytecount len_out);
+     Make an @code{ALLOCA()} copy of the data in the Eistring, using the
+     default internal format.  Due to the nature of @code{ALLOCA()}, this
+     must be a macro, with all lvalues passed in as parameters.
+     (More specifically, not all compilers correctly handle using
+     @code{ALLOCA()} as the argument to a function call -- GCC on x86
+     didn't used to, for example.) A pointer to the @code{ALLOCA()}ed data
+     is stored in PTR_OUT, and the length of the data (not including
+     the terminating zero) is stored in LEN_OUT.
+
+void eicpyout_alloca_fmt (Eistring *eistr, LVALUE: Ibyte *ptr_out,
+                          LVALUE: Bytecount len_out,
+                          Internal_Format intfmt, Lisp_Object object);
+     Like @code{eicpyout_alloca()}, but converts to the specified internal
+     format. (No formats other than FORMAT_DEFAULT are currently
+     implemented, and you get an assertion failure if you try.)
+
+Ibyte *eicpyout_malloc (Eistring *eistr, Bytecount *intlen_out);
+     Make a @code{malloc()} copy of the data in the Eistring, using the
+     default internal format.  This is a real function.  No lvalues
+     passed in.  Returns the new data, and stores the length (not
+     including the terminating zero) using INTLEN_OUT, unless it's
+     a NULL pointer.
+
+Ibyte *eicpyout_malloc_fmt (Eistring *eistr, Internal_Format intfmt,
+                              Bytecount *intlen_out, Lisp_Object object);
+     Like @code{eicpyout_malloc()}, but converts to the specified internal
+     format. (No formats other than FORMAT_DEFAULT are currently
+     implemented, and you get an assertion failure if you try.)
+
+
+ ********************************************** 
+ *             Moving to the heap             * 
+ ********************************************** 
+
+void eito_malloc (Eistring *eistr);
+     Move this Eistring to the heap.  Its data will be stored in a
+     @code{malloc()}ed block rather than the stack.  Subsequent changes to
+     this Eistring will @code{realloc()} the block as necessary.  Use this
+     when you want the Eistring to remain in scope past the end of
+     this function call.  You will have to manually free the data
+     in the Eistring using @code{eifree()}.
+
+void eito_alloca (Eistring *eistr);
+     Move this Eistring back to the stack, if it was moved to the
+     heap with @code{eito_malloc()}.  This will automatically free any
+     heap-allocated data.
+
+
+
+ ********************************************** 
+ *            Retrieving the length           * 
+ ********************************************** 
+
+Bytecount eilen (Eistring *eistr);
+     Return the length of the internal data, in bytes.  See also
+     @code{eiextlen()}, below.
+Charcount eicharlen (Eistring *eistr);
+     Return the length of the internal data, in characters.
+
+
+ ********************************************** 
+ *           Working with positions           * 
+ ********************************************** 
+
+Bytecount eicharpos_to_bytepos (Eistring *eistr, Charcount charpos);
+     Convert a char offset to a byte offset.
+Charcount eibytepos_to_charpos (Eistring *eistr, Bytecount bytepos);
+     Convert a byte offset to a char offset.
+Bytecount eiincpos (Eistring *eistr, Bytecount bytepos);
+     Increment the given position by one character.
+Bytecount eiincpos_n (Eistring *eistr, Bytecount bytepos, Charcount n);
+     Increment the given position by N characters.
+Bytecount eidecpos (Eistring *eistr, Bytecount bytepos);
+     Decrement the given position by one character.
+Bytecount eidecpos_n (Eistring *eistr, Bytecount bytepos, Charcount n);
+     Decrement the given position by N characters.
+
+
+ ********************************************** 
+ *    Getting the character at a position     * 
+ ********************************************** 
+
+Ichar eigetch (Eistring *eistr, Bytecount bytepos);
+     Return the character at a particular byte offset.
+Ichar eigetch_char (Eistring *eistr, Charcount charpos);
+     Return the character at a particular character offset.
+
+
+ ********************************************** 
+ *    Setting the character at a position     * 
+ ********************************************** 
+
+Ichar eisetch (Eistring *eistr, Bytecount bytepos, Ichar chr);
+     Set the character at a particular byte offset.
+Ichar eisetch_char (Eistring *eistr, Charcount charpos, Ichar chr);
+     Set the character at a particular character offset.
+
+
+ ********************************************** 
+ *               Concatenation                * 
+ ********************************************** 
+
+void eicat_* (Eistring *eistr, ...);
+     Concatenate onto the end of the Eistring, with data coming from the
+     same places as above:
+
+void eicat_ei (Eistring *eistr, Eistring *eistr2);
+     ... from another Eistring.
+void eicat_c (Eistring *eistr, Ascbyte *c_string);
+     ... from an ASCII null-terminated string.  Non-ASCII characters in
+     the string are @strong{ILLEGAL} (read @code{abort()} with error-checking defined).
+void eicat_raw (ei, const Ibyte *data, Bytecount len);
+     ... from raw internal-format data in the default internal format.
+void eicat_rawz (ei, const Ibyte *data);
+     ... from raw internal-format data in the default internal format
+     that is "null-terminated" (the meaning of this depends on the nature
+     of the default internal format).
+void eicat_lstr (ei, Lisp_Object lisp_string);
+     ... from a Lisp_Object string.
+void eicat_ch (ei, Ichar ch);
+     ... from an Ichar.
+
+All except the first variety are convenience functions.  (In the
+general case, create another Eistring from the source.)
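+
+/* Illustrative usage sketch (not from text.h): build up a message from
+   a Lisp string plus an ASCII suffix, then convert back to a Lisp
+   string.  `str' is assumed to be a Lisp_Object string in scope. */
+
+DECLARE_EISTRING (msg);
+eicpy_lstr (msg, str);           /* initialize from the Lisp string */
+eicat_c (msg, " (modified)");    /* append pure-ASCII text */
+Lisp_Object result = eimake_string (msg);   /* back to a Lisp string */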
+
+
+ ********************************************** 
+ *                Replacement                 * 
+ ********************************************** 
+
+void eisub_* (Eistring *eistr, Bytecount off, Charcount charoff,
+     			  Bytecount len, Charcount charlen, ...);
+     Replace a section of the Eistring, specifically:
+
+void eisub_ei (Eistring *eistr, Bytecount off, Charcount charoff,
+     	  Bytecount len, Charcount charlen, Eistring *eistr2);
+     ... with another Eistring.
+void eisub_c (Eistring *eistr, Bytecount off, Charcount charoff,
+     	 Bytecount len, Charcount charlen, Ascbyte *c_string);
+     ... with an ASCII null-terminated string.  Non-ASCII characters in
+     the string are @strong{ILLEGAL} (read @code{abort()} with error-checking defined).
+void eisub_ch (Eistring *eistr, Bytecount off, Charcount charoff,
+     	  Bytecount len, Charcount charlen, Ichar ch);
+     ... with an Ichar.
+
+void eidel (Eistring *eistr, Bytecount off, Charcount charoff,
+            Bytecount len, Charcount charlen);
+     Delete a section of the Eistring.
+
+
+ ********************************************** 
+ *      Converting to an external format      * 
+ ********************************************** 
+
+void eito_external (Eistring *eistr, Lisp_Object codesys);
+     Convert the Eistring to an external format and store the result
+     in the string.  NOTE: Further changes to the Eistring will @strong{NOT}
+     change the external data stored in the string.  You will have to
+     call @code{eito_external()} again in such a case if you want the external
+     data.
+
+Extbyte *eiextdata (Eistring *eistr);
+     Return a pointer to the external data stored in the Eistring as
+     a result of a prior call to @code{eito_external()}.
+
+Bytecount eiextlen (Eistring *eistr);
+     Return the length in bytes of the external data stored in the
+     Eistring as a result of a prior call to @code{eito_external()}.
+
+
+ ********************************************** 
+ * Searching in the Eistring for a character  * 
+ ********************************************** 
+
+Bytecount eichr (Eistring *eistr, Ichar chr);
+Charcount eichr_char (Eistring *eistr, Ichar chr);
+Bytecount eichr_off (Eistring *eistr, Ichar chr, Bytecount off,
+     		Charcount charoff);
+Charcount eichr_off_char (Eistring *eistr, Ichar chr, Bytecount off,
+     		     Charcount charoff);
+Bytecount eirchr (Eistring *eistr, Ichar chr);
+Charcount eirchr_char (Eistring *eistr, Ichar chr);
+Bytecount eirchr_off (Eistring *eistr, Ichar chr, Bytecount off,
+     		 Charcount charoff);
+Charcount eirchr_off_char (Eistring *eistr, Ichar chr, Bytecount off,
+     		      Charcount charoff);
+
+
+ ********************************************** 
+ *   Searching in the Eistring for a string   * 
+ ********************************************** 
+
+Bytecount eistr_ei (Eistring *eistr, Eistring *eistr2);
+Charcount eistr_ei_char (Eistring *eistr, Eistring *eistr2);
+Bytecount eistr_ei_off (Eistring *eistr, Eistring *eistr2, Bytecount off,
+     		   Charcount charoff);
+Charcount eistr_ei_off_char (Eistring *eistr, Eistring *eistr2,
+     			Bytecount off, Charcount charoff);
+Bytecount eirstr_ei (Eistring *eistr, Eistring *eistr2);
+Charcount eirstr_ei_char (Eistring *eistr, Eistring *eistr2);
+Bytecount eirstr_ei_off (Eistring *eistr, Eistring *eistr2, Bytecount off,
+     		    Charcount charoff);
+Charcount eirstr_ei_off_char (Eistring *eistr, Eistring *eistr2,
+     			 Bytecount off, Charcount charoff);
+
+Bytecount eistr_c (Eistring *eistr, Ascbyte *c_string);
+Charcount eistr_c_char (Eistring *eistr, Ascbyte *c_string);
+Bytecount eistr_c_off (Eistring *eistr, Ascbyte *c_string, Bytecount off,
+     		   Charcount charoff);
+Charcount eistr_c_off_char (Eistring *eistr, Ascbyte *c_string,
+     		       Bytecount off, Charcount charoff);
+Bytecount eirstr_c (Eistring *eistr, Ascbyte *c_string);
+Charcount eirstr_c_char (Eistring *eistr, Ascbyte *c_string);
+Bytecount eirstr_c_off (Eistring *eistr, Ascbyte *c_string,
+     		   Bytecount off, Charcount charoff);
+Charcount eirstr_c_off_char (Eistring *eistr, Ascbyte *c_string,
+     			Bytecount off, Charcount charoff);
+
+
+ ********************************************** 
+ *                 Comparison                 * 
+ ********************************************** 
+
+int eicmp_* (Eistring *eistr, ...);
+int eicmp_off_* (Eistring *eistr, Bytecount off, Charcount charoff,
+                 Bytecount len, Charcount charlen, ...);
+int eicasecmp_* (Eistring *eistr, ...);
+int eicasecmp_off_* (Eistring *eistr, Bytecount off, Charcount charoff,
+                     Bytecount len, Charcount charlen, ...);
+int eicasecmp_i18n_* (Eistring *eistr, ...);
+int eicasecmp_i18n_off_* (Eistring *eistr, Bytecount off, Charcount charoff,
+                          Bytecount len, Charcount charlen, ...);
+
+     Compare the Eistring with the other data.  Return value same as
+     from strcmp.  The `*' is either `ei' for another Eistring (in
+     which case `...' is an Eistring), or `c' for a pure-ASCII string
+     (in which case `...' is a pointer to that string).  For anything
+     more complex, first create an Eistring out of the source.
+     Comparison is either simple (`eicmp_...'), ASCII case-folding
+     (`eicasecmp_...'), or multilingual case-folding
+     (`eicasecmp_i18n_...').
+
+
+More specifically, the prototypes are:
+
+int eicmp_ei (Eistring *eistr, Eistring *eistr2);
+int eicmp_off_ei (Eistring *eistr, Bytecount off, Charcount charoff,
+                  Bytecount len, Charcount charlen, Eistring *eistr2);
+int eicasecmp_ei (Eistring *eistr, Eistring *eistr2);
+int eicasecmp_off_ei (Eistring *eistr, Bytecount off, Charcount charoff,
+                      Bytecount len, Charcount charlen, Eistring *eistr2);
+int eicasecmp_i18n_ei (Eistring *eistr, Eistring *eistr2);
+int eicasecmp_i18n_off_ei (Eistring *eistr, Bytecount off,
+     		      Charcount charoff, Bytecount len,
+     		      Charcount charlen, Eistring *eistr2);
+
+int eicmp_c (Eistring *eistr, Ascbyte *c_string);
+int eicmp_off_c (Eistring *eistr, Bytecount off, Charcount charoff,
+                 Bytecount len, Charcount charlen, Ascbyte *c_string);
+int eicasecmp_c (Eistring *eistr, Ascbyte *c_string);
+int eicasecmp_off_c (Eistring *eistr, Bytecount off, Charcount charoff,
+                     Bytecount len, Charcount charlen,
+                     Ascbyte *c_string);
+int eicasecmp_i18n_c (Eistring *eistr, Ascbyte *c_string);
+int eicasecmp_i18n_off_c (Eistring *eistr, Bytecount off, Charcount charoff,
+                          Bytecount len, Charcount charlen,
+                          Ascbyte *c_string);
+
+
+ ********************************************** 
+ *         Case-changing the Eistring         * 
+ ********************************************** 
+
+void eilwr (Eistring *eistr);
+     Convert all characters in the Eistring to lowercase.
+void eiupr (Eistring *eistr);
+     Convert all characters in the Eistring to uppercase.
+@end example
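+
+To make the listing above concrete, here is a minimal, hedged usage
+sketch.  It assumes an Eistring has been declared and initialized with
+the @code{DECLARE_EISTRING()} macro, and it uses the hypothetical names
+@code{user_name} (a Lisp string) and @code{report_to_system()} (an
+external callee); everything else is taken from the API above.
+
+@example
+@group
+DECLARE_EISTRING (msg);        /* declare and initialize an empty Eistring */
+
+eicat_lstr (msg, user_name);   /* append the contents of a Lisp string */
+eicat_ch (msg, '!');           /* append a single Ichar */
+eilwr (msg);                   /* lowercase every character */
+eito_external (msg, Qnative);  /* encode into the native external format */
+report_to_system (eiextdata (msg), eiextlen (msg));
+@end group
+@end example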
+
+@node Coding for Mule, CCL, Internal Text API's, Multilingual Support
+@section Coding for Mule
+@cindex coding for Mule
+@cindex Mule, coding for
+
+Although Mule support is not compiled by default in XEmacs, many people
+are using it, and we consider it crucial that new code works correctly
+with multibyte characters.  This is not hard; it is only a matter of
+following several simple user-interface guidelines.  Even if you never
+compile with Mule, with a little practice you will find it quite easy
+to code Mule-correctly.
+
+Note that these guidelines are not necessarily tied to the current Mule
+implementation; they are also a good idea to follow on the grounds of
+code generalization for future I18N work.
+
+@menu
+* Character-Related Data Types::  
+* Working With Character and Byte Positions::  
+* Conversion to and from External Data::  
+* General Guidelines for Writing Mule-Aware Code::  
+* An Example of Mule-Aware Code::  
+* Mule-izing Code::             
+@end menu
+
+@node Character-Related Data Types, Working With Character and Byte Positions, Coding for Mule, Coding for Mule
+@subsection Character-Related Data Types
+@cindex character-related data types
+@cindex data types, character-related
+
+First, let's review the basic character-related datatypes used by
+XEmacs.  Note that some of the separate @code{typedef}s are not
+mandatory, but they improve clarity of code a great deal, because one
+glance at the declaration can tell the intended use of the variable.
+
+@table @code
+@item Ichar
+@cindex Ichar
+An @code{Ichar} holds a single Emacs character.
+
+Obviously, the equality between characters and bytes is lost in the Mule
+world.  Characters can be represented by one or more bytes in the
+buffer, and @code{Ichar} is a C type large enough to hold any
+character.  (This currently isn't quite true for ISO 10646, which
+defines a character as a 31-bit non-negative quantity, while XEmacs
+characters are only 30-bits.  This is irrelevant, unless you are
+considering using the ISO 10646 private groups to support really large
+private character sets---in particular, the Mule character set!---in
+a version of XEmacs using Unicode internally.)
+
+Without Mule support, an @code{Ichar} is equivalent to an
+@code{unsigned char}.  [[This doesn't seem to be true; @file{lisp.h}
+unconditionally @samp{typedef}s @code{Ichar} to @code{int}.]]
+
+@item Ibyte
+@cindex Ibyte
+The data representing the text in a buffer or string is logically a set
+of @code{Ibyte}s.
+
+XEmacs does not work with the same character formats all the time; when
+reading characters from the outside, it decodes them to an internal
+format, and likewise encodes them when writing.  @code{Ibyte} (in fact
+@code{unsigned char}) is the basic unit of XEmacs internal buffers and
+strings format.  An @code{Ibyte *} is the type that points at text
+encoded in the variable-width internal encoding.
+
+One character can correspond to one or more @code{Ibyte}s.  In the
+current Mule implementation, an ASCII character is represented by a
+single @code{Ibyte} with the same numeric value, and other characters
+are represented by a sequence
+of two or more @code{Ibyte}s.  (This will also be true of an
+implementation using UTF-8 as the internal encoding.  In fact, only code
+that implements character code conversions and a very few macros used to
+implement motion by whole characters will notice the difference between
+UTF-8 and the Mule encoding.)
+
+Without Mule support, there are exactly 256 characters, implicitly
+Latin-1, and each character is represented using one @code{Ibyte}, and
+there is a one-to-one correspondence between @code{Ibyte}s and
+@code{Ichar}s.
+
+@item Charxpos
+@itemx Charbpos
+@itemx Charcount
+@cindex Charxpos
+@cindex Charbpos
+@cindex Charcount
+A @code{Charbpos} represents a character position in a buffer.  A
+@code{Charcount} represents a number (count) of characters.  Logically,
+subtracting two @code{Charbpos} values yields a @code{Charcount} value.
+When representing a character position in a string, we just use
+@code{Charcount} directly.  The reason for having a separate typedef for
+buffer positions is that they are 1-based, whereas string positions are
+0-based and hence string counts and positions can be freely intermixed (a
+string position is equivalent to the count of characters from the
+beginning).  When representing a character position that could be either
+in a buffer or string (for example, in the extent code), @code{Charxpos}
+is used.  Although all of these are @code{typedef}ed to
+@code{EMACS_INT}, we use them in preference to @code{EMACS_INT} to make
+it clear what sort of position is being used.
+
+@code{Charxpos}, @code{Charbpos} and @code{Charcount} values are the
+only ones that are ever visible to Lisp.
+
+@item Bytexpos
+@itemx Bytebpos
+@itemx Bytecount
+@cindex Bytexpos
+@cindex Bytebpos
+@cindex Bytecount
+A @code{Bytebpos} represents a byte position in a buffer.  A
+@code{Bytecount} represents the distance between two positions, in
+bytes.  Byte positions in strings use @code{Bytecount}, and for byte
+positions that can be either in a buffer or string, @code{Bytexpos} is
+used.  The relationship between @code{Bytexpos}, @code{Bytebpos} and
+@code{Bytecount} is the same as the relationship between
+@code{Charxpos}, @code{Charbpos} and @code{Charcount}.
+
+@item Extbyte
+@cindex Extbyte
+When dealing with the outside world, XEmacs works with @code{Extbyte}s,
+which are equivalent to @code{char}.  The distance between two
+@code{Extbyte}s is a @code{Bytecount}, since external text is a
+byte-by-byte encoding.  Extbytes occur mainly at the transition point
+between internal text and external functions.  XEmacs code should not,
+if it can possibly avoid it, do any actual manipulation using external
+text, since its format is completely unpredictable (it might not even be
+ASCII-compatible).
+@end table
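+
+The following minimal sketch shows how these types typically appear
+together in declarations.  It assumes a buffer region delimited by the
+hypothetical positions @code{beg} and @code{end}, a Lisp string
+@code{str}, and the usual @code{XSTRING_DATA} and @code{XSTRING_LENGTH}
+accessors for a string's internal text and byte length.
+
+@example
+@group
+Charbpos beg = ..., end = ...;      /* two positions in a buffer */
+Charcount nchars = end - beg;       /* subtracting positions gives a count */
+
+Ibyte *ptr = XSTRING_DATA (str);         /* internal text of a Lisp string */
+Bytecount nbytes = XSTRING_LENGTH (str); /* its length, in bytes */
+@end group
+@end example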
+
+@node Working With Character and Byte Positions, Conversion to and from External Data, Character-Related Data Types, Coding for Mule
+@subsection Working With Character and Byte Positions
+@cindex character and byte positions, working with
+@cindex byte positions, working with character and
+@cindex positions, working with character and byte
+
+Now that we have defined the basic character-related types, we can look
+at the macros and functions designed for work with them and for
+conversion between them.  Most of these macros are defined in
+@file{buffer.h}, and we don't discuss all of them here, but only the
+most important ones.  Examining the existing code is the best way to
+learn about them.
+
+@table @code
+@item MAX_ICHAR_LEN
+@cindex MAX_ICHAR_LEN
+This preprocessor constant is the maximum number of buffer bytes to
+represent an Emacs character in the variable-width internal encoding.
+It is useful when allocating temporary strings to keep a known number of
+characters.  For instance:
+
+@example
+@group
+@{
+  Charcount cclen;
+  ...
+  @{
+    /* Allocate place for @var{cclen} characters. */
+    Ibyte *buf = (Ibyte *) alloca (cclen * MAX_ICHAR_LEN);
+...
+@end group
+@end example
+
+If you followed the previous section, you can guess that, logically,
+multiplying a @code{Charcount} value with @code{MAX_ICHAR_LEN} produces
+a @code{Bytecount} value.
+
+In the current Mule implementation, @code{MAX_ICHAR_LEN} equals 4.
+Without Mule, it is 1.  In a mature Unicode-based XEmacs, it will also
+be 4 (since all Unicode characters can be encoded in UTF-8 in 4 bytes or
+less), but some versions may use up to 6, in order to use the large
+private space provided by ISO 10646 to ``mirror'' the Mule code space.
+
+@item itext_ichar
+@itemx set_itext_ichar
+@cindex itext_ichar
+@cindex set_itext_ichar
+The @code{itext_ichar} macro takes an @code{Ibyte} pointer and
+returns the @code{Ichar} stored at that position.  If it were a
+function, its prototype would be:
+
+@example
+Ichar itext_ichar (Ibyte *p);
+@end example
+
+@code{set_itext_ichar} stores an @code{Ichar} to the specified byte
+position.  It returns the number of bytes stored:
+
+@example
+Bytecount set_itext_ichar (Ibyte *p, Ichar c);
+@end example
+
+It is important to note that @code{set_itext_ichar} is safe only for
+appending a character at the end of a buffer, not for overwriting a
+character in the middle.  This is because the width of characters
+varies, and @code{set_itext_ichar} cannot resize the string if it
+writes, say, a two-byte character where a single-byte character used to
+reside.
+
+A typical use of @code{set_itext_ichar} can be demonstrated by this
+example, which copies characters from buffer @var{buf} to a temporary
+string of Ibytes.
+
+@example
+@group
+@{
+  Charbpos pos;
+  /* Allocate worst-case space for the characters between
+     @var{beg} and @var{end}. */
+  Ibyte *storage = alloca_array (Ibyte, (end - beg) * MAX_ICHAR_LEN);
+  Ibyte *p = storage;
+
+  for (pos = beg; pos < end; pos++)
+    @{
+      Ichar c = BUF_FETCH_CHAR (buf, pos);
+      p += set_itext_ichar (p, c);
+    @}
+@}
+@end group
+@end example
+
+Note how @code{set_itext_ichar} is used to store the @code{Ichar}
+and advance the pointer @code{p} at the same time.
+
+@item INC_IBYTEPTR
+@itemx DEC_IBYTEPTR
+@cindex INC_IBYTEPTR
+@cindex DEC_IBYTEPTR
+These two macros increment and decrement an @code{Ibyte} pointer,
+respectively.  They will adjust the pointer by the appropriate number of
+bytes according to the byte length of the character stored there.  Both
+macros assume that the memory address is located at the beginning of a
+valid character.
+
+Without Mule support, @code{INC_IBYTEPTR (p)} and @code{DEC_IBYTEPTR (p)}
+simply expand to @code{p++} and @code{p--}, respectively.
+
+@item bytecount_to_charcount
+@cindex bytecount_to_charcount
+Given a pointer to a text string and a length in bytes, return the
+equivalent length in characters.
+
+@example
+Charcount bytecount_to_charcount (Ibyte *p, Bytecount bc);
+@end example
+
+@item charcount_to_bytecount
+@cindex charcount_to_bytecount
+Given a pointer to a text string and a length in characters, return the
+equivalent length in bytes.
+
+@example
+Bytecount charcount_to_bytecount (Ibyte *p, Charcount cc);
+@end example
+
+@item itext_n_addr
+@cindex itext_n_addr
+Return a pointer to the character at offset @var{cc} (in characters)
+from @var{p}.
+
+@example
+Ibyte *itext_n_addr (Ibyte *p, Charcount cc);
+@end example
+@end table
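+
+Putting these macros together, here is a short, hedged sketch of two
+common idioms: walking internal text character by character, and
+converting between character and byte offsets.  The names @code{str},
+@code{cpos} and @code{process_char()} are hypothetical.
+
+@example
+@group
+Ibyte *text  = XSTRING_DATA (str);
+Ibyte *limit = text + XSTRING_LENGTH (str);
+Ibyte *p     = text;
+Bytecount boff;
+Charcount cclen;
+
+/* Walk the text one character at a time. */
+while (p < limit)
+  @{
+    process_char (itext_ichar (p)); /* examine the character at P */
+    INC_IBYTEPTR (p);               /* step over it, however wide it is */
+  @}
+
+/* Convert the character offset CPOS into a byte offset and back. */
+boff  = charcount_to_bytecount (text, cpos);
+cclen = bytecount_to_charcount (text, boff);  /* == cpos */
+p     = itext_n_addr (text, cpos);            /* == text + boff */
+@end group
+@end example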
+
+@node Conversion to and from External Data, General Guidelines for Writing Mule-Aware Code, Working With Character and Byte Positions, Coding for Mule
+@subsection Conversion to and from External Data
+@cindex conversion to and from external data
+@cindex external data, conversion to and from
+
+When an external function, such as a C library function, returns a
+@code{char} pointer, you should almost never treat it as @code{Ibyte}.
+This is because these returned strings may contain 8-bit characters which
+can be misinterpreted by XEmacs, and cause a crash.  Likewise, when
+exporting a piece of internal text to the outside world, you should
+always convert it to an appropriate external encoding, lest the internal
+stuff (such as the infamous \201 characters) leak out.
+
+The interface to conversion between the internal and external
+representations of text are the numerous conversion macros defined in
+@file{buffer.h}.  There used to be a fixed set of external formats
+supported by these macros, but now any coding system can be used with
+them.  The coding system alias mechanism is used to create the
+following logical coding systems, which replace the fixed external
+formats.  The @code{dontusethis-set-symbol-value-handler} mechanism was
+enhanced to make this possible (more work on that is needed).
+
+Often useful coding systems:
+
+@table @code
+@item Qbinary
+This is the simplest format and is what we use in the absence of a more
+appropriate format.  This converts according to the @code{binary} coding
+system:
+
+@enumerate a
+@item
+On input, bytes 0--255 are converted into (implicitly Latin-1)
+characters 0--255.  A non-Mule XEmacs doesn't really know about
+different character sets and the fonts to display them, so the bytes can
+be treated as text in different 1-byte encodings by simply setting the
+appropriate fonts.  So in a sense, non-Mule XEmacs is a multilingual
+editor if, for example, different fonts are used to display text in
+different buffers, faces, or windows.  The specifier mechanism gives the
+user complete control over this kind of behavior.
+@item
+On output, characters 0--255 are converted into bytes 0--255 and other
+characters are converted into @samp{~}.
+@end enumerate
+
+@item Qnative
+Format used for the external Unix environment---@code{argv[]}, stuff
+from @code{getenv()}, stuff from the @file{/etc/passwd} file, etc.
+This is encoded according to the encoding specified by the current locale.
+[[This is dangerous; current locale is user preference, and the system
+is probably going to be something else.  Is there anything we can do
+about it?]]
+
+@item Qfile_name
+Format used for filenames.  This is normally the same as @code{Qnative},
+but the two should be distinguished for clarity and possible future
+separation -- and also because @code{Qfile_name} can be changed using either
+the @code{file-name-coding-system} or @code{pathname-coding-system} (now
+obsolete) variables.
+
+@item Qctext
+Compound-text format.  This is the standard X11 format used for data
+stored in properties, selections, and the like.  This is an 8-bit
+no-lock-shift ISO2022 coding system.  This is a real coding system,
+unlike @code{Qfile_name}, which is user-definable.
+
+@item Qmswindows_tstr
+Used for external data in all MS Windows functions that are declared to
+accept data of type @code{LPTSTR} or @code{LPCSTR}.  This maps to either
+@code{Qmswindows_multibyte} (a locale-specific encoding, same as
+@code{Qnative}) or @code{Qmswindows_unicode}, depending on whether
+XEmacs is being run under Windows 9X or Windows NT/2000/XP.
+@end table
+
+Many other coding systems are provided by default.
+
+There are two fundamental macros to convert between external and
+internal format, as well as various convenience macros to simplify the
+most common operations.
+
+@code{TO_INTERNAL_FORMAT} converts external data to internal format, and
+@code{TO_EXTERNAL_FORMAT} converts the other way around.  The arguments
+each of these receives are a source type, a source, a sink type, a sink,
+and a coding system (or a symbol naming a coding system).
+
+A typical call looks like
+@example
+TO_EXTERNAL_FORMAT (LISP_STRING, str, C_STRING_MALLOC, ptr, Qfile_name);
+@end example
+
+which means that the contents of the Lisp string @code{str} are written
+to a malloc'ed memory area which will be pointed to by @code{ptr}, after
+the function returns.  The conversion will be done using the
+@code{file-name} coding system, which will be controlled by the user
+indirectly by setting or binding the variable
+@code{file-name-coding-system}.
+
+Some sources and sinks require two C variables to specify.  We use some
+preprocessor magic to allow different source and sink types, and even
+different numbers of arguments to specify different types of sources and
+sinks.
+
+So we can have a call that looks like
+@example
+TO_INTERNAL_FORMAT (DATA, (ptr, len),
+                    MALLOC, (ptr, len),
+                    coding_system);
+@end example
+
+The parenthesized argument pairs are required to make the preprocessor
+magic work.
+
+Here are the different source and sink types:
+
+@table @code
+@item @code{DATA, (ptr, len),}
+input data is a fixed buffer of size @var{len} at address @var{ptr}
+@item @code{ALLOCA, (ptr, len),}
+output data is placed in an @code{alloca()}ed buffer of size @var{len} pointed to by @var{ptr}
+@item @code{MALLOC, (ptr, len),}
+output data is in a @code{malloc()}ed buffer of size @var{len} pointed to by @var{ptr}
+@item @code{C_STRING_ALLOCA, ptr,}
+equivalent to @code{ALLOCA (ptr, len_ignored)} on output.
+@item @code{C_STRING_MALLOC, ptr,}
+equivalent to @code{MALLOC (ptr, len_ignored)} on output
+@item @code{C_STRING, ptr,}
+equivalent to @code{DATA, (ptr, strlen/wcslen (ptr))} on input
+@item @code{LISP_STRING, string,}
+input or output is a Lisp_Object of type string
+@item @code{LISP_BUFFER, buffer,}
+output is written to @code{(point)} in lisp buffer @var{buffer}
+@item @code{LISP_LSTREAM, lstream,}
+input or output is a Lisp_Object of type lstream
+@item @code{LISP_OPAQUE, object,}
+input or output is a Lisp_Object of type opaque
+@end table
+
+A source type of @code{C_STRING} or a sink type of
+@code{C_STRING_ALLOCA} or @code{C_STRING_MALLOC} is appropriate where
+the external API is not '\0'-byte-clean -- i.e. it expects strings to be
+terminated with a null byte.  For external API's that are in fact
+'\0'-byte-clean, we should of course not use these.
+
+The sinks to be specified must be lvalues, unless they are the lisp
+object types @code{LISP_LSTREAM} or @code{LISP_BUFFER}.
+
+There is no problem using the same lvalue for source and sink.
+
+Garbage collection is inhibited during these conversion operations, so
+it is OK to pass in data from Lisp strings using @code{XSTRING_DATA}.
+
+For the sink types @code{ALLOCA} and @code{C_STRING_ALLOCA}, the
+resulting text is stored in a stack-allocated buffer, which is
+automatically freed on returning from the function.  However, the sink
+types @code{MALLOC} and @code{C_STRING_MALLOC} return @code{xmalloc()}ed
+memory.  The caller is responsible for freeing this memory using
+@code{xfree()}.
+
+Note that it doesn't make sense for @code{LISP_STRING} to be a source
+for @code{TO_INTERNAL_FORMAT} or a sink for @code{TO_EXTERNAL_FORMAT}.
+You'll get an assertion failure if you try.
+
+99% of conversions involve raw data or Lisp strings as both source and
+sink, and usually the output data is @code{alloca()}ed, or sometimes
+@code{xmalloc()}ed.  For this reason, convenience macros are defined for
+many types of conversions involving raw data and/or Lisp strings,
+especially when the output is an @code{alloca()}ed string. (When the
+destination is a Lisp string, there are other functions that should be
+used instead -- @code{build_ext_string()} and @code{make_ext_string()},
+for example.) The convenience macros are of two types -- the older kind
+that store the result into a specified variable, and the newer kind that
+return the result.  The newer kind of macros don't exist when the output
+is sized data, because that would have two return values.  NOTE: All
+convenience macros are ultimately defined in terms of
+@code{TO_EXTERNAL_FORMAT} and @code{TO_INTERNAL_FORMAT}.  Thus, any
+comments above about the workings of these macros also apply to all
+convenience macros.
+
+A typical old-style convenience macro is
+
+@example
+  C_STRING_TO_EXTERNAL (in, out, codesys);
+@end example
+
+This is equivalent to
+
+@example
+  TO_EXTERNAL_FORMAT (C_STRING, in, C_STRING_ALLOCA, out, codesys);
+@end example
+
+but is easier to write and somewhat clearer, since it clearly identifies
+the arguments without the clutter of having the preprocessor types mixed
+in.
+
+The new-style equivalent is @code{NEW_C_STRING_TO_EXTERNAL (src,
+codesys)}, which @emph{returns} the converted data (still in
+@code{alloca()} space).  This is far more convenient for most
+operations.
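+
+As a concrete (and hedged) illustration, here is how a Lisp string
+@code{filename} might be handed to an external API that wants a
+malloc'ed, '\0'-terminated string in the file-name encoding.  The
+callee @code{take_c_filename()} is hypothetical; the macros and types
+are the ones described above.
+
+@example
+@group
+@{
+  Extbyte *extname;
+
+  TO_EXTERNAL_FORMAT (LISP_STRING, filename,
+                      C_STRING_MALLOC, extname, Qfile_name);
+  take_c_filename (extname);
+  xfree (extname);  /* MALLOC-type sinks must be freed by the caller */
+@}
+@end group
+@end example
+
+Since the callee here does not hold onto the pointer, a
+@code{C_STRING_ALLOCA} sink (or the @code{C_STRING_TO_EXTERNAL}
+convenience macro) would work just as well and avoid the explicit
+@code{xfree()}.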
+
+@node General Guidelines for Writing Mule-Aware Code, An Example of Mule-Aware Code, Conversion to and from External Data, Coding for Mule
+@subsection General Guidelines for Writing Mule-Aware Code
+@cindex writing Mule-aware code, general guidelines for
+@cindex Mule-aware code, general guidelines for writing
+@cindex code, general guidelines for writing Mule-aware
+
+This section contains some general guidance on how to write Mule-aware
+code, as well as some pitfalls you should avoid.
+
+@table @emph
+@item Never use @code{char} and @code{char *}.
+In XEmacs, the use of @code{char} and @code{char *} is almost always a
+mistake.  If you want to manipulate an Emacs character from ``C'', use
+@code{Ichar}.  If you want to examine a specific octet in the internal
+format, use @code{Ibyte}.  If you want a Lisp-visible character, use a
+@code{Lisp_Object} and @code{make_char}.  If you want a pointer to move
+through the internal text, use @code{Ibyte *}.  Also note that you
+almost certainly do not need @code{Ichar *}.  Other typedefs to clarify
+the use of @code{char} are @code{Char_ASCII}, @code{Char_Binary},
+@code{UChar_Binary}, and @code{CIbyte}.
+
+@item Be careful not to confuse @code{Charcount}, @code{Bytecount}, @code{Charbpos} and @code{Bytebpos}.
+The whole point of using different types is to avoid confusion about the
+use of certain variables.  Lest this effect be nullified, you need to be
+careful about using the right types.
+
+@item Always convert external data
+It is extremely important to always convert external data, because
+XEmacs can crash if unexpected 8-bit sequences are copied to its internal
+buffers literally.
+
+This means that when a system function, such as @code{readdir}, returns
+a string, you normally need to convert it using one of the conversion macros
+described in the previous chapter, before passing it further to Lisp.
+
+Actually, most of the basic system functions that accept '\0'-terminated
+string arguments, like @code{stat()} and @code{open()}, have
+@strong{encapsulated} equivalents that do the internal to external
+conversion themselves.  The encapsulated equivalents have a @code{qxe_}
+prefix and have string arguments of type @code{Ibyte *}, and you can
+pass internally encoded data to them, often from a Lisp string using
+@code{XSTRING_DATA}. (A better design might be to provide versions that
+accept Lisp strings directly.)  [[Really?  Then they'd either take
+@code{Lisp_Object}s and need to check type, or they'd take
+@code{Lisp_String}s, and violate the rules about passing any of the
+specific Lisp types.]]
+
+Also note that many internal functions, such as @code{make_string},
+accept Ibytes, which removes the need for them to convert the data they
+receive.  This increases efficiency because that way external data needs
+to be decoded only once, when it is read.  After that, it is passed
+around in internal format.
+
+@item Do all work in internal format
+External-formatted data is completely unpredictable in its format.  It
+may be fixed-width Unicode (not even ASCII compatible); it may be a
+modal encoding, in
+which case some occurrences of (e.g.) the slash character may be part of
+two-byte Asian-language characters, and a naive attempt to split apart a
+pathname by slashes will fail; etc.  Internal-format text should be
+converted to external format only at the point where an external API is
+actually called, and the first thing done after receiving
+external-format text from an external API should be to convert it to
+internal text.
+@end table
+
+@node An Example of Mule-Aware Code, Mule-izing Code, General Guidelines for Writing Mule-Aware Code, Coding for Mule
+@subsection An Example of Mule-Aware Code
+@cindex code, an example of Mule-aware
+@cindex Mule-aware code, an example of
+
+As an example of Mule-aware code, we will analyze the @code{string}
+function, which conses up a Lisp string from the character arguments it
+receives.  Here is the definition, pasted from @file{alloc.c}:
+
+@example
+@group
+DEFUN ("string", Fstring, 0, MANY, 0, /*
+Concatenate all the argument characters and make the result a string.
+*/
+       (int nargs, Lisp_Object *args))
+@{
+  Ibyte *storage = alloca_array (Ibyte, nargs * MAX_ICHAR_LEN);
+  Ibyte *p = storage;
+
+  for (; nargs; nargs--, args++)
+    @{
+      Lisp_Object lisp_char = *args;
+      CHECK_CHAR_COERCE_INT (lisp_char);
+      p += set_itext_ichar (p, XCHAR (lisp_char));
+    @}
+  return make_string (storage, p - storage);
+@}
+@end group
+@end example
+
+Now we can analyze the source line by line.
+
+Obviously, the resulting string will contain as many characters as there
+are arguments to the function.  This is why we allocate
+@code{MAX_ICHAR_LEN} * @var{nargs}
+bytes on the stack, i.e. the worst-case number of bytes for @var{nargs}
+@code{Ichar}s to fit in the string.
+
+Then, the loop checks that each element is a character, converting
+integers in the process.  Like many other functions in XEmacs, this
+function silently accepts integers where characters are expected, for
+historical and compatibility reasons.  Unless you specifically need that
+compatibility behavior, plain @code{CHECK_CHAR} will also suffice.
+@code{XCHAR (lisp_char)} extracts the @code{Ichar} from the
+@code{Lisp_Object}, and @code{set_itext_ichar} stores it at @code{p},
+advancing @code{p} by the number of bytes written.
+
+Other instructive examples of correct coding under Mule can be found all
+over the XEmacs code.  For starters, I recommend
+@code{Fnormalize_menu_item_name} in @file{menubar.c}.  After you have
+understood this section of the manual and studied the examples, you can
+proceed writing new Mule-aware code.
+
+@node Mule-izing Code,  , An Example of Mule-Aware Code, Coding for Mule
+@subsection Mule-izing Code
+
+A lot of code is written without Mule in mind, and needs to be made
+Mule-correct or ``Mule-ized''.  There is really no substitute for
+line-by-line analysis when doing this, but the following checklist can
+help (a short before-and-after sketch follows the checklist):
+
+@itemize @bullet
+@item
+Check all uses of @code{XSTRING_DATA}.
+@item
+Check all uses of @code{build_string} and @code{make_string}.
+@item
+Check all uses of @code{tolower} and @code{toupper}.
+@item
+Check object print methods.
+@item
+Check for use of functions such as @code{write_c_string},
+@code{write_fmt_string}, @code{stderr_out}, @code{stdout_out}.
+@item
+Check all occurrences of @code{char} and correct to one of the other
+typedefs described above.
+@item
+Check all existing uses of @code{TO_EXTERNAL_FORMAT},
+@code{TO_INTERNAL_FORMAT}, and any convenience macros (grep for
+@samp{EXTERNAL_TO}, @samp{TO_EXTERNAL}, and @samp{TO_SIZED_EXTERNAL}).
+@item
+In Windows code, string literals may need to be encapsulated with @code{XETEXT}.
+@end itemize
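+
+As a hedged illustration of the kind of change involved (the variable
+and function names here are made up, and @code{qxe_open()} is assumed to
+be the encapsulated equivalent of @code{open()}), a typical fix looks
+like this:
+
+@example
+@group
+/* Before: internal text handed raw to an external API. */
+char *name = (char *) XSTRING_DATA (file);
+open (name, O_RDONLY);
+
+/* After, alternative 1: use the encapsulated qxe_* equivalent,
+   which accepts internal text directly. */
+qxe_open (XSTRING_DATA (file), O_RDONLY);
+
+/* After, alternative 2: convert explicitly at the boundary. */
+@{
+  Extbyte *extname;
+  C_STRING_TO_EXTERNAL (XSTRING_DATA (file), extname, Qfile_name);
+  open (extname, O_RDONLY);
+@}
+@end group
+@end example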
+
+@node CCL, Modules for Internationalization, Coding for Mule, Multilingual Support
 @section CCL
 @cindex CCL
 
 @example
-CCL PROGRAM SYNTAX:
-     CCL_PROGRAM := (CCL_MAIN_BLOCK
-                     [ CCL_EOF_BLOCK ])
-
-     CCL_MAIN_BLOCK := CCL_BLOCK
-     CCL_EOF_BLOCK := CCL_BLOCK
-
-     CCL_BLOCK := STATEMENT | (STATEMENT [STATEMENT ...])
-     STATEMENT :=
-             SET | IF | BRANCH | LOOP | REPEAT | BREAK
-             | READ | WRITE
-
-     SET := (REG = EXPRESSION) | (REG SELF_OP EXPRESSION)
-            | INT-OR-CHAR
-
-     EXPRESSION := ARG | (EXPRESSION OP ARG)
-
-     IF := (if EXPRESSION CCL_BLOCK CCL_BLOCK)
-     BRANCH := (branch EXPRESSION CCL_BLOCK [CCL_BLOCK ...])
-     LOOP := (loop STATEMENT [STATEMENT ...])
-     BREAK := (break)
-     REPEAT := (repeat)
-             | (write-repeat [REG | INT-OR-CHAR | string])
-             | (write-read-repeat REG [INT-OR-CHAR | string | ARRAY]?)
-     READ := (read REG) | (read REG REG)
-             | (read-if REG ARITH_OP ARG CCL_BLOCK CCL_BLOCK)
-             | (read-branch REG CCL_BLOCK [CCL_BLOCK ...])
-     WRITE := (write REG) | (write REG REG)
-             | (write INT-OR-CHAR) | (write STRING) | STRING
-             | (write REG ARRAY)
-     END := (end)
-
-     REG := r0 | r1 | r2 | r3 | r4 | r5 | r6 | r7
-     ARG := REG | INT-OR-CHAR
-     OP :=   + | - | * | / | % | & | '|' | ^ | << | >> | <8 | >8 | //
-             | < | > | == | <= | >= | !=
-     SELF_OP :=
-             += | -= | *= | /= | %= | &= | '|=' | ^= | <<= | >>=
-     ARRAY := '[' INT-OR-CHAR ... ']'
-     INT-OR-CHAR := INT | CHAR
-
 MACHINE CODE:
 
 The machine code consists of a vector of 32-bit words.
@@ -9587,7 +12664,88 @@
                 ..........AAAAA
 @end example
 
-@node The Lisp Reader and Compiler, Lstreams, MULE Character Sets and Encodings, Top
+@node Modules for Internationalization,  , CCL, Multilingual Support
+@section Modules for Internationalization
+@cindex modules for internationalization
+@cindex internationalization, modules for
+
+@example
+@file{mule-canna.c}
+@file{mule-ccl.c}
+@file{mule-charset.c}
+@file{mule-charset.h}
+@file{file-coding.c}
+@file{file-coding.h}
+@file{mule-coding.c}
+@file{mule-mcpath.c}
+@file{mule-mcpath.h}
+@file{mule-wnnfns.c}
+@file{mule.c}
+@end example
+
+These files implement the MULE (Asian-language) support.  Note that MULE
+actually provides a general interface for all sorts of languages, not
+just Asian languages (although they are generally the most complicated
+to support).  This code is still in beta.
+
+@file{mule-charset.*} and @file{file-coding.*} provide the heart of the
+XEmacs MULE support.  @file{mule-charset.*} implements the @dfn{charset}
+Lisp object type, which encapsulates a character set (an ordered one- or
+two-dimensional set of characters, such as US ASCII or JISX0208 Japanese
+Kanji).
+
+@file{file-coding.*} implements the @dfn{coding-system} Lisp object
+type, which encapsulates a method of converting between different
+encodings.  An encoding is a representation of a stream of characters,
+possibly from multiple character sets, using a stream of bytes or words,
+and defines (e.g.) which escape sequences are used to specify particular
+character sets, how the indices for a character are converted into bytes
+(sometimes this involves setting the high bit; sometimes complicated
+rearranging of the values takes place, as in the Shift-JIS encoding),
+etc.  It also contains some generic coding system implementations, such
+as the binary (no-conversion) coding system and a sample gzip coding system.
+
+@file{mule-coding.c} contains the implementations of text coding systems.
+
+@file{mule-ccl.c} provides the CCL (Code Conversion Language)
+interpreter.  CCL is similar in spirit to Lisp byte code and is used to
+implement converters for custom encodings.
+
+@file{mule-canna.c} and @file{mule-wnnfns.c} implement interfaces to
+external programs used to implement the Canna and WNN input methods,
+respectively.  This is currently in beta.
+
+@file{mule-mcpath.c} provides some functions to allow for pathnames
+containing extended characters.  This code is fragmentary, obsolete, and
+completely non-working.  Instead, @code{pathname-coding-system} is used
+to specify conversions of names of files and directories.  The standard
+C I/O functions like @samp{open()} are wrapped so that conversion occurs
+automatically.
+
+@file{mule.c} contains a few miscellaneous things.  It currently seems
+to be unused and probably should be removed.
+
+
+
+@example
+@file{intl.c}
+@end example
+
+This provides some miscellaneous internationalization code for
+implementing message translation and interfacing to the Ximp input
+method.  None of this code is currently working.
+
+
+
+@example
+@file{iso-wide.h}
+@end example
+
+This contains leftover code from an earlier implementation of
+Asian-language support, and is not currently used.
+
+
+@node The Lisp Reader and Compiler, Lstreams, Multilingual Support, Top
 @chapter The Lisp Reader and Compiler
 @cindex Lisp reader and compiler, the
 @cindex reader and compiler, the Lisp
@@ -9616,7 +12774,7 @@
 * Lstream Methods::             Creating new lstream types.
 @end menu
 
-@node Creating an Lstream
+@node Creating an Lstream, Lstream Types, Lstreams, Lstreams
 @section Creating an Lstream
 @cindex lstream, creating an
 
@@ -9648,7 +12806,7 @@
   Open for writing, but never writes partial MULE characters.
 @end table
 
-@node Lstream Types
+@node Lstream Types, Lstream Functions, Creating an Lstream, Lstreams
 @section Lstream Types
 @cindex lstream types
 @cindex types, lstream
@@ -9675,7 +12833,7 @@
 @item encoding
 @end table
 
-@node Lstream Functions
+@node Lstream Functions, Lstream Methods, Lstream Types, Lstreams
 @section Lstream Functions
 @cindex lstream functions
 
@@ -9759,7 +12917,7 @@
 Rewind the stream to the beginning.
 @end deftypefun
 
-@node Lstream Methods
+@node Lstream Methods,  , Lstream Functions, Lstreams
 @section Lstream Methods
 @cindex lstream methods
 
@@ -9833,13 +12991,14 @@
 @cindex windows, consoles; devices; frames;
 
 @menu
-* Introduction to Consoles; Devices; Frames; Windows::
-* Point::
-* Window Hierarchy::
-* The Window Object::
+* Introduction to Consoles; Devices; Frames; Windows::  
+* Point::                       
+* Window Hierarchy::            
+* The Window Object::           
+* Modules for the Basic Displayable Lisp Objects::  
 @end menu
 
-@node Introduction to Consoles; Devices; Frames; Windows
+@node Introduction to Consoles; Devices; Frames; Windows, Point, Consoles; Devices; Frames; Windows, Consoles; Devices; Frames; Windows
 @section Introduction to Consoles; Devices; Frames; Windows
 @cindex consoles; devices; frames; windows, introduction to
 @cindex devices; frames; windows, introduction to consoles;
@@ -9885,7 +13044,7 @@
 within it to become the selected window.  Similar relationships apply
 for consoles to devices and devices to frames.
 
-@node Point
+@node Point, Window Hierarchy, Introduction to Consoles; Devices; Frames; Windows, Consoles; Devices; Frames; Windows
 @section Point
 @cindex point
 
@@ -9907,7 +13066,7 @@
 buffer's point instead.  This is related to why @code{save-window-excursion}
 does not save the selected window's value of @code{point}.
 
-@node Window Hierarchy
+@node Window Hierarchy, The Window Object, Point, Consoles; Devices; Frames; Windows
 @section Window Hierarchy
 @cindex window hierarchy
 @cindex hierarchy of windows
@@ -10005,7 +13164,7 @@
 artifact that should be fixed.)
 @end enumerate
 
-@node The Window Object
+@node The Window Object, Modules for the Basic Displayable Lisp Objects, Window Hierarchy, Consoles; Devices; Frames; Windows
 @section The Window Object
 @cindex window object, the
 @cindex object, the window
@@ -10112,6 +13271,99 @@
 this field is @code{nil}.
 @end table
 
+@node Modules for the Basic Displayable Lisp Objects,  , The Window Object, Consoles; Devices; Frames; Windows
+@section Modules for the Basic Displayable Lisp Objects
+@cindex modules for the basic displayable Lisp objects
+@cindex displayable Lisp objects, modules for the basic
+@cindex Lisp objects, modules for the basic displayable
+@cindex objects, modules for the basic displayable Lisp
+
+@example
+@file{console-msw.c}
+@file{console-msw.h}
+@file{console-stream.c}
+@file{console-stream.h}
+@file{console-tty.c}
+@file{console-tty.h}
+@file{console-x.c}
+@file{console-x.h}
+@file{console.c}
+@file{console.h}
+@end example
+
+These modules implement the @dfn{console} Lisp object type.  A console
+contains multiple display devices, but only one keyboard and mouse.
+Most of the time, a console will contain exactly one device.
+
+Consoles are at the top of a Lisp object inclusion hierarchy.  Consoles
+contain devices, which contain frames, which contain windows.
+
+
+
+@example
+@file{device-msw.c}
+@file{device-tty.c}
+@file{device-x.c}
+@file{device.c}
+@file{device.h}
+@end example
+
+These modules implement the @dfn{device} Lisp object type.  This
+abstracts a particular screen or connection on which frames are
+displayed.  As with Lisp objects, event interfaces, and other
+subsystems, the device code is separated into a generic component, which
+presents a standardized interface (in the form of a set of methods), and
+device-type-specific implementations of those methods.
+
+The device subsystem defines all the methods and provides method
+services for not only device operations but also for the frame, window,
+menubar, scrollbar, toolbar, and other displayable-object subsystems.
+The reason for this is that all of these subsystems have the same
+subtypes (X, TTY, NeXTstep, Microsoft Windows, etc.) as devices do.
+
+
+
+@example
+@file{frame-msw.c}
+@file{frame-tty.c}
+@file{frame-x.c}
+@file{frame.c}
+@file{frame.h}
+@end example
+
+Each device contains one or more frames in which objects (e.g. text) are
+displayed.  A frame corresponds to a window in the window system;
+usually this is a top-level window but it could potentially be one of a
+number of overlapping child windows within a top-level window, using the
+MDI (Multiple Document Interface) protocol in Microsoft Windows or a
+similar scheme.
+
+The @file{frame-*} files implement the @dfn{frame} Lisp object type and
+provide the generic and device-type-specific operations on frames
+(e.g. raising, lowering, resizing, moving, etc.).
+
+
+
+@example
+@file{window.c}
+@file{window.h}
+@end example
+
+@cindex window (in Emacs)
+@cindex pane
+Each frame consists of one or more non-overlapping @dfn{windows} (better
+known as @dfn{panes} in standard window-system terminology) in which a
+buffer's text can be displayed.  Windows can also have scrollbars
+displayed around their edges.
+
+@file{window.c} and @file{window.h} implement the @dfn{window} Lisp
+object type and provide code to manage windows.  Since windows have no
+associated resources in the window system (the window system knows only
+about the frame; no child windows or anything are used for XEmacs
+windows), there is no device-type-specific code here; all of that code
+is part of the redisplay mechanism or the code for particular object
+types such as scrollbars.
+
 @node The Redisplay Mechanism, Extents, Consoles; Devices; Frames; Windows, Top
 @chapter The Redisplay Mechanism
 @cindex redisplay mechanism, the
@@ -10135,12 +13387,14 @@
 @end enumerate
 
 @menu
-* Critical Redisplay Sections::
-* Line Start Cache::
-* Redisplay Piece by Piece::
+* Critical Redisplay Sections::  
+* Line Start Cache::            
+* Redisplay Piece by Piece::    
+* Modules for the Redisplay Mechanism::  
+* Modules for other Display-Related Lisp Objects::  
 @end menu
 
-@node Critical Redisplay Sections
+@node Critical Redisplay Sections, Line Start Cache, The Redisplay Mechanism, The Redisplay Mechanism
 @section Critical Redisplay Sections
 @cindex redisplay sections, critical
 @cindex critical redisplay sections
@@ -10173,7 +13427,7 @@
 #### If a frame-size change does occur we should probably
 actually be preempting redisplay.
 
-@node Line Start Cache
+@node Line Start Cache, Redisplay Piece by Piece, Critical Redisplay Sections, The Redisplay Mechanism
 @section Line Start Cache
 @cindex line start cache
 
@@ -10234,7 +13488,7 @@
   In case you're wondering, the Second Golden Rule of Redisplay is not
 applicable.
 
-@node Redisplay Piece by Piece
+@node Redisplay Piece by Piece, Modules for the Redisplay Mechanism, Line Start Cache, The Redisplay Mechanism
 @section Redisplay Piece by Piece
 @cindex redisplay piece by piece
 
@@ -10285,6 +13539,173 @@
 @code{create_text_block} to do with cursor handling and selective
 display have been removed.
 
+@node Modules for the Redisplay Mechanism, Modules for other Display-Related Lisp Objects, Redisplay Piece by Piece, The Redisplay Mechanism
+@section Modules for the Redisplay Mechanism
+@cindex modules for the redisplay mechanism
+@cindex redisplay mechanism, modules for the
+
+@example
+@file{redisplay-output.c}
+@file{redisplay-msw.c}
+@file{redisplay-tty.c}
+@file{redisplay-x.c}
+@file{redisplay.c}
+@file{redisplay.h}
+@end example
+
+These files provide the redisplay mechanism.  As with many other
+subsystems in XEmacs, there is a clean separation between the general
+and device-specific support.
+
+@file{redisplay.c} contains the bulk of the redisplay engine.  These
+functions update the redisplay structures (which describe how the screen
+is to appear) to reflect any changes made to the state of any
+displayable objects (buffer, frame, window, etc.) since the last time
+that redisplay was called.  These functions are highly optimized to
+avoid doing more work than necessary (since redisplay is called
+extremely often and is potentially a huge time sink), and depend heavily
+on notifications from the objects themselves that changes have occurred,
+so that redisplay doesn't explicitly have to check each possible object.
+The redisplay mechanism also contains a great deal of caching to further
+speed things up; some of this caching is contained within the various
+displayable objects.
+
+@file{redisplay-output.c} goes through the redisplay structures and converts
+them into calls to device-specific methods to actually output the screen
+changes.
+
+@file{redisplay-x.c} and @file{redisplay-tty.c} are two implementations
+of these redisplay output methods, for X frames and TTY frames,
+respectively.
+
+
+
+@example
+@file{indent.c}
+@end example
+
+This module contains various functions and Lisp primitives for
+converting between buffer positions and screen positions.  These
+functions call the redisplay mechanism to do most of the work, and then
+examine the redisplay structures to get the necessary information.  This
+module needs work.
+
+
+
+@example
+@file{termcap.c}
+@file{terminfo.c}
+@file{tparam.c}
+@end example
+
+These files contain functions for working with the termcap (BSD-style)
+and terminfo (System V style) databases of terminal capabilities and
+escape sequences, used when XEmacs is displaying in a TTY.
+
+
+
+@example
+@file{cm.c}
+@file{cm.h}
+@end example
+
+These files provide some miscellaneous TTY-output functions and should
+probably be merged into @file{redisplay-tty.c}.
+
+
+
+@node Modules for other Display-Related Lisp Objects,  , Modules for the Redisplay Mechanism, The Redisplay Mechanism
+@section Modules for other Display-Related Lisp Objects
+@cindex modules for other display-related Lisp objects
+@cindex display-related Lisp objects, modules for other
+@cindex Lisp objects, modules for other display-related
+
+@example
+@file{faces.c}
+@file{faces.h}
+@end example
+
+
+
+@example
+@file{bitmaps.h}
+@file{glyphs-eimage.c}
+@file{glyphs-msw.c}
+@file{glyphs-msw.h}
+@file{glyphs-widget.c}
+@file{glyphs-x.c}
+@file{glyphs-x.h}
+@file{glyphs.c}
+@file{glyphs.h}
+@end example
+
+
+
+@example
+@file{objects-msw.c}
+@file{objects-msw.h}
+@file{objects-tty.c}
+@file{objects-tty.h}
+@file{objects-x.c}
+@file{objects-x.h}
+@file{objects.c}
+@file{objects.h}
+@end example
+
+
+
+@example
+@file{menubar-msw.c}
+@file{menubar-msw.h}
+@file{menubar-x.c}
+@file{menubar.c}
+@file{menubar.h}
+@end example
+
+
+
+@example
+@file{scrollbar-msw.c}
+@file{scrollbar-msw.h}
+@file{scrollbar-x.c}
+@file{scrollbar-x.h}
+@file{scrollbar.c}
+@file{scrollbar.h}
+@end example
+
+
+
+@example
+@file{toolbar-msw.c}
+@file{toolbar-x.c}
+@file{toolbar.c}
+@file{toolbar.h}
+@end example
+
+
+
+@example
+@file{font-lock.c}
+@end example
+
+This file provides C support for syntax highlighting---i.e.
+highlighting different syntactic constructs of a source file in
+different colors, for easy reading.  The C support is provided so that
+this is fast.
+
+
+
+@example
+@file{dgif_lib.c}
+@file{gif_err.c}
+@file{gif_lib.h}
+@file{gifalloc.c}
+@end example
+
+These modules decode GIF-format image files, for use with glyphs.
+(These files were later removed due to Unisys patent infringement
+concerns.)
+
+
 @node Extents, Faces, The Redisplay Mechanism, Top
 @chapter Extents
 @cindex extents
@@ -10298,7 +13719,7 @@
 * Extent Fragments::            Cached information useful for redisplay.
 @end menu
 
-@node Introduction to Extents
+@node Introduction to Extents, Extent Ordering, Extents, Extents
 @section Introduction to Extents
 @cindex extents, introduction to
 
@@ -10321,7 +13742,7 @@
 however, and just ended up complexifying and buggifying all the
 rest of the code.)
 
-@node Extent Ordering
+@node Extent Ordering, Format of the Extent Info, Introduction to Extents, Extents
 @section Extent Ordering
 @cindex extent ordering
 
@@ -10356,7 +13777,7 @@
 all occurrences of ``display order'' and ``e-order'', ``less than'' and
 ``greater than'', and ``extent start'' and ``extent end''.
 
-@node Format of the Extent Info
+@node Format of the Extent Info, Zero-Length Extents, Extent Ordering, Extents
 @section Format of the Extent Info
 @cindex extent info, format of the
 
@@ -10419,7 +13840,7 @@
 are not as good, and repeated localized operations will be slower than
 for a gap array).  Such code is quite tricky to write, however.
 
-@node Zero-Length Extents
+@node Zero-Length Extents, Mathematics of Extent Ordering, Format of the Extent Info, Extents
 @section Zero-Length Extents
 @cindex zero-length extents
 @cindex extents, zero-length
@@ -10450,7 +13871,7 @@
 exactly like markers and that open-closed, non-detachable zero-length
 extents behave like the ``point-type'' marker in Mule.
 
-@node Mathematics of Extent Ordering
+@node Mathematics of Extent Ordering, Extent Fragments, Zero-Length Extents, Extents
 @section Mathematics of Extent Ordering
 @cindex mathematics of extent ordering
 @cindex extent mathematics
@@ -10578,7 +13999,7 @@
 @math{S}, including @math{F}.  Otherwise, @math{F2} includes @math{I}
 and thus is in @math{S}, and thus @math{F2 >= F}.
 
-@node Extent Fragments
+@node Extent Fragments,  , Mathematics of Extent Ordering, Extents
 @section Extent Fragments
 @cindex extent fragments
 @cindex fragments, extent
@@ -10761,6 +14182,10 @@
 
 Not yet documented.
 
+Specifiers are documented in depth in the Lisp Reference manual.
+@xref{Specifiers,,, lispref, XEmacs Lisp Reference Manual}.  The code in
+@file{specifier.c} is pretty straightforward.
+
 @node Menus, Subprocesses, Specifiers, Top
 @chapter Menus
 @cindex menus
@@ -10814,7 +14239,7 @@
 its argument, which is the callback function or form given in the menu's
 description.
 
-@node Subprocesses, Interface to the X Window System, Menus, Top
+@node Subprocesses, Interface to MS Windows, Menus, Top
 @chapter Subprocesses
 @cindex subprocesses
 
@@ -10888,7 +14313,544 @@
 or @code{nil} if it is using pipes.
 @end table
 
-@node Interface to the X Window System, Index, Subprocesses, Top
+@node Interface to MS Windows, Interface to the X Window System, Subprocesses, Top
+@chapter Interface to MS Windows
+@cindex MS Windows, interface to
+@cindex Windows, interface to
+
+@menu
+* Different kinds of Windows environments::  
+* Windows Build Flags::         
+* Windows I18N Introduction::   
+* Modules for Interfacing with MS Windows::  
+@end menu
+
+@node Different kinds of Windows environments, Windows Build Flags, Interface to MS Windows, Interface to MS Windows
+@section Different kinds of Windows environments
+@cindex different kinds of Windows environments
+@cindex Windows environments, different kinds of
+@cindex MS Windows environments, different kinds of
+
+@subsubheading (a) operating system (OS) vs. window system vs. Win32 API vs. C runtime library (CRT) vs. compiler
+
+There are various Windows operating systems (Windows NT, 2000, XP, 95,
+98, ME, etc.), which come in two basic classes: Windows NT (NT, 2000,
+XP, and all future versions) and 9x (95, 98, ME).  9x-class operating
+systems are a kind of hodgepodge of a 32-bit upper layer on top of a
+16-bit MS-DOS-compatible lower layer.  NT-class operating systems are
+written from the ground up as 32-bit (there are also 64-bit versions
+available now), and provide many more features and much greater
+stability, since there is full memory protection between all processes
+and between processes and the system.  NT-class operating systems
+also provide emulation for DOS programs inside of a "sandbox" (i.e. a
+walled-off environment in which one DOS program can screw up another
+one, but there is theoretically no way for a DOS program to screw up the
+OS itself).  From the perspective of XEmacs, the difference between NT
+and 9x is very important in Unicode support (not really provided under
+9x -- see @file{intl-win32.c}) and subprocess creation, among other things.
+
+The operating system provides the framework for accessing files and
+devices and running programs.  From the perspective of a program, the
+operating system provides a set of services.  At the lowest level, the
+way to call these services is dependent on the processor the OS is
+running on, but a portable interface is provided to C programs through
+functions called "system calls".  Under Windows, this interface is called
+the Win32 API, and includes file-manipulation calls such as @code{CreateFile()}
+and @code{ReadFile()}, process-creation calls such as @code{CreateProcess()}, etc.
+
+This concept of system calls goes back to Unix, where similar services
+are available but through routines with different, simpler names, such
+as @code{open()}, @code{read()}, @code{fork()}, @code{execve()}, etc.  In addition, Unix provides
+a higher layer of routines, called the C Runtime Library (CRT), which
+provide higher-level, more convenient versions of the same services (e.g.
+"stream-oriented" file routines such as @code{fopen()} and @code{fread()}) as well
+as various other utility functions, such as string-manipulation routines
+(e.g. @code{strcpy()} and @code{strcmp()}).
+
+For compatibility, a C Runtime Library (CRT) is also provided under
+Windows, which provides a partial implementation of both the Unix CRT
+and the Unix system-call API, implemented using the Win32 API.  The CRT
+sources come with Visual C++ (VC++).  For example, under VC++ 6, look in
+the CRT/SRC directory, e.g. for me (ben): /Program Files/Microsoft
+Visual Studio/VC98/CRT/SRC. The CRT is provided using either MSVCRT
+(dynamically linked) or @file{LIBC.LIB} (statically linked).
+
+The window system provides the framework for creating overlapped windows
+and unifying signals provided by various devices (input devices such as
+the keyboard and mouse, timers, etc.) into a single event queue (or
+"message queue", under Windows).  Like the operating system, the window
+system can be viewed from the perspective of a program as a set of
+services provided by an API of function calls.  Under Windows,
+window-system services are also available through the Win32 API, while
+under UNIX the window system is typically a separate component (e.g. the
+X Windowing System, aka X Windows or X11).  The term "GUI" ("graphical
+user interface") is often used to refer to the services provided by the
+window system, or to a windowing interface provided by a program.
+
+The Win32 API is implemented by various dynamic libraries, or DLL's.
+The most important are KERNEL32, USER32, and GDI32.  KERNEL32 implements
+the basic file-system and process services.  USER32 implements the
+fundamental window-system services such as creating windows and handling
+messages.  GDI32 implements higher-level drawing capabilities -- fonts,
+colors, lines, etc.
+
+C programs are compiled into executables using a compiler.  Under Unix,
+a compiler usually comes as part of the operating system, but not under
+Windows, where the compiler is a separate product.  Even under Unix,
+people often install their own compilers, such as gcc.  Under Windows,
+the Microsoft-standard compiler is Visual C++ (VC++).
+
+It is possible to provide an emulation of any API using any other, as
+long as the underlying API provides the suitable functionality.  This is
+what Cygwin (www.cygwin.com) does.  It provides a fairly complete POSIX
+emulation layer (POSIX is a government standard for Unix behavior) on
+top of MS Windows -- in particular, providing the file-system, process,
+tty, and signal semantics that are part of a modern, standard Unix
+operating system.  Cygwin does this using its own DLL, @file{cygwin1.dll},
+which makes calls to the Win32 API services in @file{kernel32.dll}.  Cygwin
+also provides its own implementation of the C runtime library, called
+@code{newlib} (@file{libcygwin.a}; @file{libc.a} and @file{libm.a} are symlinked to it), which is
+implemented on top of the Unix system calls provided in @file{cygwin1.dll}.  In
+addition, Cygwin provides static import libraries that give you direct
+access to the Win32 API -- XEmacs uses this to provide GUI support under
+Cygwin.  Cygwin provides a version of GCC (the GNU Project C compiler)
+that is set up to automatically link with the appropriate Cygwin
+libraries.  Cygwin also provides, as optional components, pre-compiled
+binaries for a great number of open-source programs compiled under the
+Cygwin environment.  This includes all of the standard Unix file-system,
+text-manipulation, development, networking, database, etc. utilities, a
+version of X Windows that uses the Win32 API underlyingly (see below),
+and compilations of nearly all other common open-source packages
+(Apache, TeX, [X]Emacs, Ghostscript, GTK, ImageMagick, etc.).
+
+Similarly, you can emulate the functionality of X Windows using the
+Win32 component of the Win32 API.  Cygwin provides a package to do this,
+from the XFree86 project.  Other versions of X under Windows also exist,
+such as the MicroImages MI/X server.  Each version potentially comes
+with its own header and library files, allowing you to compile
+X-Windows programs.
+
+All of these different operating system and emulation layers can make
+for a fair amount of confusion, so:
+
+@subsubheading (b) CRT is not the same as VC++
+
+Note that the CRT is @strong{NOT} (completely) part of VC++.  True, if you link
+statically, the CRT (in the form of @file{LIBC.LIB}, which comes with VC++)
+will be inserted into the executable (.EXE), but otherwise the CRT will
+be separate.  The dynamic version of the CRT is provided by @file{MSVCRT.DLL}
+(or @file{MSVCRTD.DLL}, for debugging), which comes with Windows.  Hence, it's
+possible to use a different compiler and still link with MSVCRT -- which
+is exactly what MinGW does.
+
+@subsubheading (c) CRT is not the same as the Win32 API
+
+Note also that the CRT is totally separate from the Win32 API.  They
+provide different functions and are implemented in different DLL's.
+They are also at different levels -- the CRT is implemented on top of
+Win32.  Sometimes the CRT and Win32 both have their own versions of
+similar concepts, such as locales.  These are typically maintained
+separately, and can get out of sync.  Do not assume that changing a
+setting in the CRT will have any effect on Win32 API routines using a
+similar concept unless the CRT docs specifically say so.  Do not assume
+that behavior described for CRT functions applies to Win32 API or
+vice-versa.  Note also that the CRT knows about and is implemented on
+top of the Win32 API, while the Win32 API knows nothing about the CRT.
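+
+As a concrete illustration -- this is a sketch, not XEmacs code -- the
+following fragment changes the CRT locale and then queries the Win32
+thread locale.  The two are maintained independently, so the
+@code{setlocale()} call has no effect on what @code{GetThreadLocale()}
+returns:
+
+@example
+/* Illustrative only: the CRT and the Win32 API keep separate locale
+   state. */
+#include <windows.h>
+#include <locale.h>
+#include <stdio.h>
+
+int
+main (void)
+@{
+  LCID lcid;
+
+  /* CRT-level locale change: affects CRT routines such as strftime()
+     and isalpha(), but not the Win32 API. */
+  setlocale (LC_ALL, "");
+
+  /* Win32-level locale query: unaffected by the setlocale() call
+     above.  Win32 routines such as GetDateFormat() consult this. */
+  lcid = GetThreadLocale ();
+
+  printf ("CRT locale: %s\n", setlocale (LC_ALL, NULL));
+  printf ("Win32 thread locale: 0x%04lx\n", (unsigned long) lcid);
+  return 0;
+@}
+@end example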
+
+@subsubheading (d) MinGW is not the same as Cygwin
+
+As described in (b), Microsoft's version of the CRT (@file{MSVCRT.DLL}) is
+provided as part of Windows, separate from VC++, which must be
+purchased.  Hence, it is possible to use MSVCRT to provide CRT
+services without using VC++.  This is what MinGW (www.mingw.org) does --
+it is a port of GCC that will use MSVCRT.  The reason one might want to
+do this is (a) it is free, and (b) it does not require a separately
+installed DLL, as Cygwin does. (#### Maybe MinGW targets CRTDLL, not
+MSVCRT?  If so, what is CRTDLL, and how does it differ from MSVCRT and
+@file{LIBC.LIB}?) Primarily, what MinGW provides is patches to GCC (now
+integrated into the standard distribution) and its own header files and
+import libraries that are compatible with MSVCRT.  The best way to think
+of MinGW is as simply another Windows compiler, much as there used to
+be both Microsoft and Borland compilers.  Because MinGW programs use all the
+same libraries as VC++ programs, and hence the same services are
+available, programs that compile under VC++ should compile under MinGW
+with very little change, whereas programs that compile under Cygwin will
+look quite different.
+
+The confusion between MinGW and Cygwin is the confusion between the
+environment that a compiler runs under and the target environment of a
+program, i.e. the environment that a program is compiled to run under.
+It's theoretically possible, for example, to compile a program under
+Windows and generate a binary that can only be run under Linux, or
+vice-versa -- or, for that matter, to use Windows running on an Intel
+machine to write and compile a program that will run on the Mac OS
+running on a PowerPC machine.  This is called cross-compiling, and while
+it may seem rather esoteric, it is quite normal when you want to
+generate a program for a machine that you cannot develop on -- for
+example, a program that will run on a Palm Pilot.  Originally, this is
+how MinGW worked -- you needed to run GCC under a Cygwin environment and
+give it appropriate flags, telling it to use the MinGW headers and
+target @file{MSVCRT.DLL} rather than @file{CYGWIN1.DLL}. (In fact,
+Cygwin standardly comes with MinGW's header files.) This was because GCC
+was written with Unix in mind and relied on a large amount of
+Unix-specific functionality.  To port GCC to Windows without using a
+POSIX emulation layer would mean a lot of rewriting of GCC.  Eventually,
+however, this was done, and GCC itself was compiled using MinGW.  The
+result is that currently you can develop MinGW applications either under
+Cygwin or under native Windows.
+
+@subsubheading (e) Operating system is not the same as window system
+
+As per the above discussion, we can use either Native Windows (the OS
+part of Win32 provided by @file{KERNEL32.DLL} and the Windows CRT as
+provided by @file{MSVCRT.DLL} or @file{LIBC.LIB}) or Cygwin to provide
+operating-system
+functionality, and we can use either Native Windows (the windowing part
+of Win32 as provided by @file{USER32.DLL} and @file{GDI32.DLL}) or X11
+to provide window-system functionality.  This gives us four possible
+build environments.  It's currently possible to build XEmacs with at
+least three of these combinations -- as far as I know native + X11 is no
+longer supported, although it used to be (support used to exist in
+@file{xemacs.mak} for linking with some X11 libraries available from
+somewhere, but it was bit-rotting and you could always use Cygwin; ####
+what happens if we try to compile with MinGW, native OS + X11?).  This
+may still seem confusing, so:
+
+@table @asis
+@item Native OS + native windowing
+We call @code{CreateProcess()} to run subprocesses
+(@file{process-nt.c}), and @code{CreateWindowEx()} to create a top-level
+window (@file{frame-msw.c}).  We use @file{nt/xemacs.mak} to compile
+with VC++, linking with the Windows CRT (@file{MSVCRT.DLL} or
+@file{LIBC.LIB}) and with the various Win32 DLL's (@file{KERNEL32.DLL},
+@file{USER32.DLL}, @file{GDI32.DLL}); or we use
+@file{src/Makefile[.in.in]} to compile with GCC, telling it
+(e.g. -mno-cygwin, see @file{s/mingw32.h}) to use MinGW (which will end
+up linking with @file{MSVCRT.DLL}), and linking GCC with -lshell32
+-lgdi32 -luser32 etc. (see @file{configure.in}).
+
+@item Cygwin + native windowing 
+We call @code{fork()}/@code{execve()} to run subprocesses
+(@file{process-unix.c}), and @code{CreateWindowEx()} to create a
+top-level window (@file{frame-msw.c}).  We use
+@file{src/Makefile[.in.in]} to compile with GCC (it will end up linking
+with @file{CYGWIN1.DLL}) and link GCC with -lshell32 -lgdi32 -luser32
+etc. (see @file{configure.in}).
+
+@item Cygwin + X11
+We call @code{fork()}/@code{execve()} to run subprocesses
+(@file{process-unix.c}), and @code{XtCreatePopupShell()} to create a
+top-level window (@file{frame-x.c}).  We use @file{src/Makefile[.in.in]}
+to compile with GCC (it will end up linking with @file{CYGWIN1.DLL}) and
+link GCC with -lXt, -lX11, etc. (see @file{configure.in}).
+
+Finally, if native OS + X11 were possible, it might look something like this:
+
+@item [Native OS + X11]
+We call @code{CreateProcess()} to run subprocesses
+(@file{process-nt.c}), and @code{XtCreatePopupShell()} to create a
+top-level window (@file{frame-x.c}).  We use @file{nt/xemacs.mak} to
+compile with VC++, linking with the Windows CRT (@file{MSVCRT.DLL} or
+@file{LIBC.LIB}) and with the various X11 DLL's (@file{XT.DLL},
+@file{XLIB.DLL}, etc.); or we use @file{src/Makefile[.in.in]} to compile with
+GCC, telling it (e.g. -mno-cygwin, see @file{s/mingw32.h}) to use MinGW
+(which will end up linking with @file{MSVCRT.DLL}), and linking GCC with
+-lXt, -lX11, etc. (see @file{configure.in}).
+@end table
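+
+For instance, here is a rough, self-contained sketch -- not code from
+@file{process-nt.c} or @file{process-unix.c}, and the program names
+used are arbitrary -- of what the two styles of subprocess creation
+referred to above look like, keyed off the @code{WIN32_NATIVE} flag
+described in the next section:
+
+@example
+#ifdef WIN32_NATIVE
+# include <windows.h>
+# include <string.h>
+
+static void
+spawn_example (void)
+@{
+  STARTUPINFO si;
+  PROCESS_INFORMATION pi;
+  TCHAR cmd[] = TEXT ("notepad.exe");  /* arbitrary example program */
+
+  memset (&si, 0, sizeof (si));
+  si.cb = sizeof (si);
+  if (CreateProcess (NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL,
+                     &si, &pi))
+    @{
+      CloseHandle (pi.hThread);
+      CloseHandle (pi.hProcess);
+    @}
+@}
+
+#else  /* Cygwin or Unix */
+# include <unistd.h>
+
+static void
+spawn_example (void)
+@{
+  if (fork () == 0)
+    @{
+      char *const argv[] = @{ "/bin/ls", NULL @};
+      char *const envp[] = @{ NULL @};
+      execve ("/bin/ls", argv, envp);   /* arbitrary example program */
+      _exit (127);
+    @}
+@}
+#endif
+@end example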
+
+One of the reasons that we maintain the ability to build under Cygwin
+and X11 on Windows, when we have native support, is that it allows
+Windows developers to test under a Unix-like environment.
+
+@node Windows Build Flags, Windows I18N Introduction, Different kinds of Windows environments, Interface to MS Windows
+@section Windows Build Flags
+@cindex Windows build flags
+@cindex MS Windows build flags
+@cindex build flags, Windows
+
+@table @code
+@item CYGWIN
+For Cygwin-only stuff.
+@item WIN32_NATIVE
+For Win32 native OS-level stuff (files, processes, etc.).  Applies
+whenever linking against the native C libraries -- i.e. all
+compilations with VC++ and with MINGW, but never Cygwin.
+@item HAVE_X_WINDOWS
+For X Windows (regardless of whether running under MS Windows).
+@item HAVE_MS_WINDOWS
+For the MS Windows native windowing system (anything related to the
+appearance of the graphical screen).  May or may not apply to any of
+VC++, MINGW, Cygwin.
+@end table
+
+Finally, there's also the MinGW build environment, which uses GCC
+(similar to Cygwin), but native MS Windows libraries rather than a
+POSIX emulation layer (the Cygwin approach).  This environment defines
+@code{WIN32_NATIVE}, but also defines @code{MINGW}, which is used
+mostly because MinGW uses its own include files (related to Cygwin),
+which have a few things messed up.
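+
+To make the use of these flags concrete, here is a purely illustrative
+sketch (not taken from the XEmacs sources) of how a source file might
+be conditionalized on them:
+
+@example
+#ifdef WIN32_NATIVE
+/* Native OS-level services: linking against the native C libraries
+   (VC++ or MinGW), so the Win32 file-system and process calls are
+   available. */
+# include <windows.h>
+#else
+/* Cygwin or a real Unix: POSIX primitives are available. */
+# include <unistd.h>
+#endif
+
+#ifdef HAVE_MS_WINDOWS
+/* Native windowing system: GUI calls such as CreateWindowEx(). */
+#endif
+
+#ifdef HAVE_X_WINDOWS
+/* X11 windowing, whether on Unix or under Cygwin on MS Windows. */
+# include <X11/Intrinsic.h>
+#endif
+
+#ifdef CYGWIN
+/* Cygwin-only quirks. */
+#endif
+
+#ifdef MINGW
+/* Workarounds for MinGW's own include files. */
+#endif
+@end example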
+
+Formerly, we had a whole host of flags.  Here's the conversion, for porting
+code from GNU Emacs and such:
+
+@c @multitable {Old Constant} {determine whether this code is really specific to MS-DOS (and not Windows -- e.g. DJGPP code}
+@multitable @columnfractions .25 .75
+@item Old Constant @tab New Constant
+@item ----------------------------------------------------------------
+@item @code{WINDOWSNT}
+@tab @code{WIN32_NATIVE}
+@item @code{WIN32}
+@tab @code{WIN32_NATIVE}
+@item @code{_WIN32}
+@tab @code{WIN32_NATIVE}
+@item @code{HAVE_WIN32}
+@tab @code{WIN32_NATIVE}
+@item @code{DOS_NT}
+@tab @code{WIN32_NATIVE}
+@item @code{HAVE_NTGUI}
+@tab @code{WIN32_NATIVE}, unless it ends up already bracketed by this
+@item @code{HAVE_FACES}
+@tab always true
+@item @code{MSDOS}
+@tab determine whether this code is really specific to MS-DOS (and not
+Windows -- e.g. DJGPP code); if so, delete the code; otherwise,
+convert to @code{WIN32_NATIVE} (we do not support MS-DOS w/DOS Extender
+under XEmacs)
+@item @code{__CYGWIN__}
+@tab @code{CYGWIN}
+@item @code{__CYGWIN32__}
+@tab @code{CYGWIN}
+@item @code{__MINGW32__}
+@tab @code{MINGW}
+@end multitable
+
+@node Windows I18N Introduction, Modules for Interfacing with MS Windows, Windows Build Flags, Interface to MS Windows
+@section Windows I18N Introduction
+@cindex Windows I18N
+@cindex I18N, Windows
+@cindex MS Windows I18N
+
+@strong{Abstract:} This section provides an overview of the aspects of the
+Win32 internationalization API that are relevant to XEmacs, including
+the basic distinction between multibyte and Unicode encodings.  Also
+included are pointers to how XEmacs should make use of this API.
+
+The Win32 API is quite well-designed in its handling of strings encoded
+for various character sets.  The API is geared around the idea that two
+different methods of encoding strings should be supported.  These
+methods are called multibyte and Unicode, respectively.  The multibyte
+encoding is compatible with ASCII strings and is a more efficient
+representation when dealing with strings containing primarily ASCII
+characters, but it has a great number of serious deficiencies and
+limitations, including that it is very difficult and error-prone to work
+with strings in this encoding, and any particular string in a multibyte
+encoding can only contain characters from a very limited number of
+character sets.  The Unicode encoding rectifies all of these
+deficiencies, but it is not compatible with ASCII strings (in other
+words, an existing program will not be able to handle the encoded
+strings unless it is explicitly modified to do so), and it takes up
+twice as much memory space as multibyte encodings when encoding a purely
+ASCII string.
+
+Multibyte encodings use a variable number of bytes (either one or two)
+to represent characters.  ASCII characters are also represented by a
+single byte with its high bit not set, and non-ASCII characters are
+represented by one or two bytes, the first of which always has its high
+bit set.  (The second byte, when it exists, may or may not have its high
+bit set.)  There is no single multibyte encoding.  Instead, there is
+generally one encoding per non-ASCII character set.  Such an encoding is
+capable of representing (besides ASCII characters, of course) only
+characters from one (or possibly two) particular character sets.
+
+Multibyte encoding makes processing of strings very difficult.  For
+example, given a pointer to the beginning of a character within a
+string, finding the pointer to the beginning of the previous character
+may require backing up all the way to the beginning of the string, and
+then moving forward.  Also, an operation such as separating out the
+components of a path by searching for backslashes will fail if it's
+implemented in the simplest (but not multibyte-aware) fashion, because
+it may find what appears to be a backslash, but which is actually the
+second byte of a two-byte character.  Also, the limited number of
+character sets that any particular multibyte encoding can represent
+means that loss of data is likely if a string is converted from the
+XEmacs internal format into a multibyte format.
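+
+As an illustration -- again a sketch, not actual XEmacs code, since
+XEmacs sidesteps the problem by never operating on multibyte strings --
+compare a naive byte-oriented search for a path separator with one
+that uses the Win32 call @code{CharPrevA()} to step backwards a whole
+character at a time:
+
+@example
+#include <windows.h>
+#include <string.h>
+
+/* Naive, byte-oriented version: wrong for multibyte strings, because
+   the trail byte of a two-byte character can itself have the value
+   of '\\'. */
+const char *
+last_backslash_naive (const char *path)
+@{
+  return strrchr (path, '\\');
+@}
+
+/* Multibyte-aware version. */
+const char *
+last_backslash_mb (const char *path)
+@{
+  const char *p = path + strlen (path);
+
+  while (p > path)
+    @{
+      p = CharPrevA (path, p);  /* never lands inside a character */
+      if (*p == '\\')
+        return p;
+    @}
+  return NULL;
+@}
+@end example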
+
+For these reasons, the C code in XEmacs should never do any sort of work
+with multibyte encoded strings (or with strings in any external encoding
+for that matter).  Strings should always be maintained in the internal
+encoding, which is predictable, and converted to an external encoding
+only at the point where the string moves from the XEmacs C code and
+enters a system library function.  Similarly, when a string is returned
+from a system library function, it should be immediately converted into
+the internal coding before any operations are done on it.
+
+Unicode, unlike multibyte encodings, is a fixed-width encoding where
+every character is represented using 16 bits.  It is also capable of
+encoding all the characters from all the character sets in common use in
+the world.  The predictability and completeness of the Unicode encoding
+makes it a very good encoding for strings that may contain characters
+from many character sets mixed up with each other.  At the same time, of
+course, it is incompatible with routines that expect ASCII characters
+and also incompatible with general string manipulation routines, which
+will encounter a great number of what would appear to be embedded nulls
+in the string.  It also takes twice as much room to encode strings
+containing primarily ASCII characters.  This is why XEmacs does not use
+Unicode or similar encoding internally for buffers.
+
+The Win32 API cleverly deals with the issue of 8 bit vs. 16 bit
+characters by declaring a type called @code{@dfn{TCHAR}} which specifies
+a generic character, either 8 bits or 16 bits.  Generally @code{TCHAR}
+is defined to be the same as the simple C type @code{char}, unless the
+preprocessor constant @code{UNICODE} is defined, in which case
+@code{TCHAR} is defined to be @code{WCHAR}, which is a 16 bit type.
+Nearly all functions in the Win32 API that take strings are defined to
+take strings that are actually arrays of @code{TCHAR}s.  There is a type
+@code{LPTSTR} which is defined to be a string of @code{TCHAR}s and
+another type @code{LPCTSTR} which is a const string of @code{TCHAR}s.
+The theory is that any program that uses @code{TCHAR}s exclusively to
+represent characters and does not make assumptions about the size of a
+@code{TCHAR} or the way that the characters are encoded should work
+transparently regardless of whether the @code{UNICODE} preprocessor
+constant is defined, which is to say, regardless of whether 8 bit
+multibyte or 16 bit Unicode characters are being used.  The way that
+this is actually implemented is that every Win32 API function that takes
+a string as an argument actually maps to one of two functions which are
+suffixed with an @code{A} (which stands for ANSI, and means multibyte
+strings) or @code{W} (which stands for wide, and means Unicode strings).
+The mapping is, of course, controlled by the same @code{UNICODE}
+preprocessor constant.  Generally all structures containing strings in
+them actually map to one of two different kinds of structures, with
+either an @code{A} or a @code{W} suffix after the structure name.
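+
+A minimal, purely illustrative example of this convention (not XEmacs
+code) looks like this:
+
+@example
+#include <windows.h>
+
+int WINAPI
+WinMain (HINSTANCE inst, HINSTANCE prev, LPSTR cmdline, int show)
+@{
+  /* TEXT() expands to "..." without UNICODE and to L"..." with it,
+     so the literal always matches the width of TCHAR. */
+  LPCTSTR msg = TEXT ("Hello from a TCHAR string");
+
+  /* MessageBox is really a macro that maps to MessageBoxA or
+     MessageBoxW, depending on the UNICODE preprocessor constant. */
+  MessageBox (NULL, msg, TEXT ("Example"), MB_OK);
+  return 0;
+@}
+@end example
+
+The same source therefore compiles into either the multibyte (@code{A})
+or the Unicode (@code{W}) flavor of each call.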
+
+Unfortunately, not all of the implementations of the Win32 API
+implement all of the functionality described above.  In particular,
+Windows 95 does not implement very much Unicode functionality.  It
+does implement functions to convert multibyte-encoded strings to and
+from Unicode strings, and provides Unicode versions of certain
+low-level functions like @code{ExtTextOut()}.  However, all of
+the rest of the Unicode versions of API functions are just stubs that
+return an error.  Conversely, all versions of Windows NT completely
+implement all the Unicode functionality, but some versions (especially
+versions before Windows NT 4.0) don't implement much of the multibyte
+functionality.  For this reason, as well as for general code
+cleanliness, XEmacs needs to be written in such a way that it works
+with or without the @code{UNICODE} preprocessor constant being
+defined.
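+
+For illustration, here is a short, hedged sketch (not XEmacs code --
+XEmacs has its own conversion machinery based on coding systems) of an
+explicit multibyte-to-Unicode conversion using the universally
+available @code{MultiByteToWideChar()}:
+
+@example
+#include <windows.h>
+
+/* Convert a NUL-terminated multibyte string (in the system code page)
+   to a freshly allocated wide string; the caller frees the result
+   with LocalFree().  Returns NULL on failure. */
+LPWSTR
+multibyte_to_unicode (LPCSTR mb)
+@{
+  LPWSTR wide;
+  int len = MultiByteToWideChar (CP_ACP, 0, mb, -1, NULL, 0);
+
+  if (len == 0)
+    return NULL;
+  wide = (LPWSTR) LocalAlloc (LMEM_FIXED, len * sizeof (WCHAR));
+  if (wide == NULL)
+    return NULL;
+  MultiByteToWideChar (CP_ACP, 0, mb, -1, wide, len);
+  return wide;
+@}
+@end example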
+
+Getting XEmacs to run when all strings are Unicode primarily
+involves removing any assumptions made about the size of characters.
+Remember what I said earlier about how the point of conversion between
+internally and externally encoded strings should occur at the point of
+entry or exit into or out of a library function.  With this in mind,
+an externally encoded string in XEmacs can be treated simply as an
+arbitrary sequence of bytes of some length which has no particular
+relationship to the length of the string in the internal encoding.
+
+#### The rest of this is @strong{out-of-date} and needs to be written
+to reference the actual coding systems or aliases that we currently use.
+
+[[ To facilitate this, the enum @code{external_data_format}, which is
+declared in @file{lisp.h}, is expanded to contain three new formats,
+which are @code{FORMAT_LOCALE}, @code{FORMAT_UNICODE} and
+@code{FORMAT_TSTR}.  @code{FORMAT_LOCALE} always causes encoding into a
+multibyte string consistent with the encoding of the current locale.
+The functions to handle locales are different under Unix and Windows and
+locales are a process property under Unix and a thread property under
+Windows, but the concepts are basically the same.  @code{FORMAT_UNICODE}
+of course causes encoding into Unicode and @code{FORMAT_TSTR} logically
+maps to either @code{FORMAT_LOCALE} or @code{FORMAT_UNICODE} depending
+on the @code{UNICODE} preprocessor constant.
+
+Under Unix the behavior of @code{FORMAT_TSTR} is undefined and this
+particular format should not be used.  Under Windows however
+@code{FORMAT_TSTR} should be used for pretty much all of the Win32 API
+calls.  The other two formats should only be used in particular APIs
+that specifically call for a multibyte or Unicode encoded string
+regardless of the @code{UNICODE} preprocessor constant.  String
+constants that are to be passed directly to Win32 API functions, such as
+the names of window classes, need to be bracketed in their definition
+with a call to the macro @code{TEXT}.  This awfully named macro, which
+comes out of the Win32 API, appropriately makes a string of either
+regular or wide chars, which is to say this string may be prepended with
+an @code{L} (causing it to be a wide string) depending on the
+@code{UNICODE} preprocessor constant.
+
+By the way, if you're wondering what happened to @code{FORMAT_OS}, I
+think that this format should go away entirely because it is too vague
+and should be replaced by more specific formats as they are defined.
+]]
+
+Use @code{Qnative} for Unix conversion, @code{Qmswindows_tstr} for Windows ...
+
+String constants that are to be passed directly to Win32 API functions,
+such as the names of window classes, need to be bracketed in their
+definition with a call to the macro @code{XETEXT}.  This appropriately
+makes a string of either regular or wide chars, which is to say this
+string may be prepended with an @code{L} (causing it to be a wide
+string) depending on @code{XEUNICODE_P}.
+
+@node Modules for Interfacing with MS Windows,  , Windows I18N Introduction, Interface to MS Windows
+@section Modules for Interfacing with MS Windows
+@cindex modules for interfacing with MS Windows
+@cindex interfacing with MS Windows, modules for
+@cindex MS Windows, modules for interfacing with
+@cindex Windows, modules for interfacing with
+
+There are two different general Windows-related include files in @file{src}.
+
+Uses are approximately:
+
+@table @file
+@item syswindows.h
+Wrapper around @file{<windows.h>}, including missing defines as
+necessary.  Includes stuff needed on both Cygwin and native Windows,
+regardless of window system chosen.  Includes definitions needed for
+Unicode conversion/encapsulation, and other Mule-related stuff, plus
+various other prototypes and Windows-specific, but not GUI-specific,
+stuff.
+
+@item console-msw.h
+Used on both Cygwin and native Windows, but only when native window
+system (as opposed to X) chosen.  Includes @file{syswindows.h}.
+@end table
+
+Summary of files:
+
+@table @file
+@item console-msw.h
+include file for native windowing (otherwise, @file{console-x.h}, etc.)
+@item console-msw.c, frame-msw.c, etc.
+native windowing, as above
+@item process-nt.c
+subprocess support for native OS (otherwise, @file{process-unix.c})
+@item nt.c
+support routines used under native OS
+@item win32.c
+support routines used under both OS environments
+@item syswindows.h
+support header for both environments
+@item nt/xemacs.mak
+Makefile for VC++ (otherwise, @file{src/Makefile.in.in})
+@item s/windowsnt.h
+s header for basic native-OS defines, VC++ compiler
+@item s/mingw32.h
+s header for basic native-OS defines, GCC/MinGW compiler
+@item s/cygwin.h
+s header for basic Cygwin defines
+@item s/win32-native.h
+s header for basic native-OS defines, all compilers
+@item s/win32-common.h
+s header for defines for both OS environments
+@item intl-win32.c
+internationalization functions for both OS environments
+@item intl-encap-win32.c
+Unicode encapsulation functions for both OS environments
+@item intl-auto-encap-win32.c
+Auto-generated Unicode encapsulation functions
+@item intl-auto-encap-win32.h
+Auto-generated Unicode encapsulation headers
+@end table
+
+@node Interface to the X Window System, Future Work, Interface to MS Windows, Top
 @chapter Interface to the X Window System
 @cindex X Window System, interface to the
 
@@ -10896,9 +14858,10 @@
 
 @menu
 * Lucid Widget Library::        An interface to various widget sets.
+* Modules for Interfacing with X Windows::  
 @end menu
 
-@node Lucid Widget Library
+@node Lucid Widget Library, Modules for Interfacing with X Windows, Interface to the X Window System, Interface to the X Window System
 @section Lucid Widget Library
 @cindex Lucid Widget Library
 @cindex widget library, Lucid
@@ -10924,14 +14887,14 @@
 
 @menu
 * Generic Widget Interface::    The lwlib generic widget interface.
-* Scrollbars::
-* Menubars::
-* Checkboxes and Radio Buttons::
-* Progress Bars::
-* Tab Controls::
+* Scrollbars::                  
+* Menubars::                    
+* Checkboxes and Radio Buttons::  
+* Progress Bars::               
+* Tab Controls::                
 @end menu
 
-@node Generic Widget Interface
+@node Generic Widget Interface, Scrollbars, Lucid Widget Library, Lucid Widget Library
 @subsection Generic Widget Interface
 @cindex widget interface, generic
 
@@ -11012,30 +14975,5553 @@ of its tree.  Widget instances are further confi
 
 
-@node Scrollbars
+@node Scrollbars, Menubars, Generic Widget Interface, Lucid Widget Library
 @subsection Scrollbars
 @cindex scrollbars
 
-@node Menubars
+@node Menubars, Checkboxes and Radio Buttons, Scrollbars, Lucid Widget Library
 @subsection Menubars
 @cindex menubars
 
-@node Checkboxes and Radio Buttons
+@node Checkboxes and Radio Buttons, Progress Bars, Menubars, Lucid Widget Library
 @subsection Checkboxes and Radio Buttons
 @cindex checkboxes and radio buttons
 @cindex radio buttons, checkboxes and
 @cindex buttons, checkboxes and radio
 
-@node Progress Bars
+@node Progress Bars, Tab Controls, Checkboxes and Radio Buttons, Lucid Widget Library
 @subsection Progress Bars
 @cindex progress bars
 @cindex bars, progress
 
-@node Tab Controls
+@node Tab Controls,  , Progress Bars, Lucid Widget Library
 @subsection Tab Controls
 @cindex tab controls
 
-@include index.texi
+
+@node Modules for Interfacing with X Windows,  , Lucid Widget Library, Interface to the X Window System
+@section Modules for Interfacing with X Windows
+@cindex modules for interfacing with X Windows
+@cindex interfacing with X Windows, modules for
+@cindex X Windows, modules for interfacing with
+
+@example
+Emacs.ad.h
+@end example
+
+A file generated from @file{Emacs.ad}, which contains XEmacs-supplied
+fallback resources (so that XEmacs has pretty defaults).
+
+
+
+@example
+EmacsFrame.c
+EmacsFrame.h
+EmacsFrameP.h
+@end example
+
+These modules implement an Xt widget class that encapsulates a frame.
+This is for ease in integrating with Xt.  The EmacsFrame widget covers
+the entire X window except for the menubar; the scrollbars are
+positioned on top of the EmacsFrame widget.
+
+@strong{Warning:} Abandon hope, all ye who enter here.  This code took
+an ungodly amount of time to get right, and is likely to fall apart
+mercilessly at the slightest change.  Such is life under Xt.
+
+
+
+@example
+EmacsManager.c
+EmacsManager.h
+EmacsManagerP.h
+@end example
+
+These modules implement a simple Xt manager (i.e. composite) widget
+class that simply lets its children set whatever geometry they want.
+It's amazing that Xt doesn't provide this standardly, but on second
+thought, it makes sense, considering how amazingly broken Xt is.
+
+
+@example
+EmacsShell-sub.c
+EmacsShell.c
+EmacsShell.h
+EmacsShellP.h
+@end example
+
+These modules implement two Xt widget classes that are subclasses of
+the TopLevelShell and TransientShell classes.  This is necessary to deal
+with more brokenness that Xt has sadistically thrust onto the backs of
+developers.
+
+
+
+@example
+xgccache.c
+xgccache.h
+@end example
+
+These modules provide functions for maintenance and caching of GC's
+(graphics contexts) under the X Window System.  This code is junky and
+needs to be rewritten.
+
+
+
+@example
+select-msw.c
+select-x.c
+select.c
+select.h
+@end example
+
+@cindex selections
+  These modules provide an interface to the X Window System's concept of
+@dfn{selections}, the standard way for X applications to communicate
+with each other.
+
+
+
+@example
+xintrinsic.h
+xintrinsicp.h
+xmmanagerp.h
+xmprimitivep.h
+@end example
+
+These header files are similar in spirit to the @file{sys*.h} files and buffer
+against different implementations of Xt and Motif.
+
+@itemize @bullet
+@item
+@file{xintrinsic.h} should be included in place of @file{<Intrinsic.h>}.
+@item
+@file{xintrinsicp.h} should be included in place of @file{<IntrinsicP.h>}.
+@item
+@file{xmmanagerp.h} should be included in place of @file{<XmManagerP.h>}.
+@item
+@file{xmprimitivep.h} should be included in place of @file{<XmPrimitiveP.h>}.
+@end itemize
+
+
+
+@example
+xmu.c
+xmu.h
+@end example
+
+These files provide an emulation of the Xmu library for those systems
+(e.g. HPUX) that don't provide it as a standard part of X.
+
+
+
+@example
+ExternalClient-Xlib.c
+ExternalClient.c
+ExternalClient.h
+ExternalClientP.h
+ExternalShell.c
+ExternalShell.h
+ExternalShellP.h
+extw-Xlib.c
+extw-Xlib.h
+extw-Xt.c
+extw-Xt.h
+@end example
+
+@cindex external widget
+  These files provide the @dfn{external widget} interface, which allows an
+XEmacs frame to appear as a widget in another application.  To do this,
+you have to configure with @samp{--external-widget}.
+
+@file{ExternalShell*} provides the server (XEmacs) side of the
+connection.
+
+@file{ExternalClient*} provides the client (other application) side of
+the connection.  These files are not compiled into XEmacs but are
+compiled into libraries that are then linked into your application.
+
+@file{extw-*} is common code that is used for both the client and server.
+
+Don't touch this code; something is liable to break if you do.
+
+
+@node Future Work, Future Work Discussion, Interface to the X Window System, Top
+@chapter Future Work
+@cindex future work
+
+@menu
+* Future Work -- Elisp Compatibility Package::  
+* Future Work -- Drag-n-Drop::  
+* Future Work -- Standard Interface for Enabling Extensions::  
+* Future Work -- Better Initialization File Scheme::  
+* Future Work -- Keyword Parameters::  
+* Future Work -- Property Interface Changes::  
+* Future Work -- Toolbars::     
+* Future Work -- Menu API Changes::  
+* Future Work -- Removal of Misc-User Event Type::  
+* Future Work -- Mouse Pointer::  
+* Future Work -- Extents::      
+* Future Work -- Version Number and Development Tree Organization::  
+* Future Work -- Improvements to the @code{xemacs.org} Website::  
+* Future Work -- Keybindings::  
+* Future Work -- Byte Code Snippets::  
+* Future Work -- Lisp Stream API::  
+* Future Work -- Multiple Values::  
+* Future Work -- Macros::       
+* Future Work -- Specifiers::   
+* Future Work -- Display Tables::  
+* Future Work -- Making Elisp Function Calls Faster::  
+* Future Work -- Lisp Engine Replacement::  
+@end menu
+
+@ignore
+Macro to convert a single line containing a heading into the format of
+all headings in the Future Work section.
+
+(setq last-kbd-macro (read-kbd-macro
+"<S-end> <f3> <home> @node SPC <end> RET @section SPC <f4> <home> <up> <C-right> <right> Future SPC Work SPC - - SPC <home> <down> <C-right> <right> Future SPC Work SPC - - SPC <end> RET @cindex SPC future SPC work, SPC <f4> C-r , RET C-x C-x M-l RET @cindex SPC <f4> <home> <C-right> <S-end> M-l , SPC future SPC work RET"))
+@end ignore
+
+@node Future Work -- Elisp Compatibility Package, Future Work -- Drag-n-Drop, Future Work, Future Work
+@section Future Work -- Elisp Compatibility Package
+@cindex future work, elisp compatibility package
+@cindex elisp compatibility package, future work
+
+A while ago I created a package called Sysdep, which aimed to be a
+forward compatibility package for Elisp.  The idea was that instead of
+having to write your package using the oldest version of Emacs that you
+wanted to support, you could use the newest XEmacs API, and then simply
+load the Sysdep package, which would automatically define the new API in
+terms of older APIs as necessary.  The idea of this package was good,
+but its design wasn't perfect, and it wasn't widely adopted.  I propose
+a new package called Compat that corrects the design flaws in Sysdep,
+and hopefully will be adopted by most of the major packages.
+
+In addition, this package will provide macros that can be used to
+bracket code as necessary to disable byte compiler warnings generated as
+a result of supporting the APIs of different versions of Emacs; or
+rather the Compat package strives to provide useful constructs to make
+doing this support easier, and these constructs have the side effect of
+not causing spurious byte compiler warnings.  The idea here is that it
+should be possible to create well-written, clean, and understandable
+Elisp that supports both older and newer APIs, and has no byte compiler
+warnings.  Currently many warnings are unavoidable, and as a result,
+they are simply ignored, which also causes a lot of legitimate warnings
+to be ignored.
+
+The approach taken by the Sysdep package to make sure that the newest
+API was always supported was fairly simple: when the Sysdep package was
+loaded, it checked for the existence of new API functions, and if they
+weren't defined, it defined them in terms of older API functions that
+were defined.  This had the advantage that the checks for which API
+functions were defined were done only once at load time rather than each
+time the function was called.  However, the fact that the new APIs were
+globally defined caused a lot of problems with unwanted interactions,
+both with other versions of the Sysdep package provided as part of other
+packages, and simply with compatibility code of other sorts in packages
+that would determine whether an API existed by checking for the
+existence of certain functions within that API.  In addition, the Sysdep
+package did not scale well because it defined all of the functions that
+it supported, regardless of whether or not they were used.
+
+The Compat package remedies the first problem by ensuring that the new
+APIs are defined only within the lexical scope of the packages that
+actually make use of the Compat package.  It remedies the second problem
+by ensuring that only definitions of functions that are actually used
+are loaded.  This all works roughly according to the following scheme:
+
+@enumerate
+@item 
+
+Part of the Compat package is a module called the Compat generator.
+This module is actually run as an additional step during byte
+compilation of a package that uses Compat.  This can happen either
+through the makefile or through the use of an @code{eval-when-compile}
+call within the package code itself.  What the generator does is scan
+all of the Lisp code in the package, determine which function calls are
+made that the Compat package knows about, and generates custom
+@code{compat} code that conditionally defines just these functions when
+the package is loaded.  The custom @code{compat} code can either be
+written to a separate Lisp file (for use with multi-file packages), or
+inserted into the beginning of the Lisp file of a single file package.
+(In the latter case, the package indicates where this generated code
+should go through the use of magic comments that mark the beginning and
+end of the section.  Some will say that doing this trick is bad juju,
+but I have done this sort of thing before, and it works very well in
+practice).
+@item 
+
+The functions in the custom @code{compat} code have their names prefixed
+with both the name of the package and the word @code{compat}, ensuring
+that there will be no name space conflicts with other functions in the
+same package, or with other packages that make use of the Compat
+package.
+@item 
+
+The actual definitions of the functions in the custom @code{compat} code
+are determined at run time.  When the equivalent API already exists, the
+wrapper functions are simply defined directly in terms of the actual
+functions, so that the only run time overhead from using the Compat
+package is one additional function call.  (Alternatively, even this
+small overhead could be avoided by retrieving the definitions of the
+actual functions and supplying them as the definitions of the wrapper
+functions.  However, this appears to me to not be completely safe.  For
+example, it might have bad interactions with the advice package).
+@item 
+
+The code that wants to make use of the custom @code{compat} code is
+bracketed by a call to the construct @code{compat-execute}.  What this
+actually does is lexically bind all of the function names that are being
+redefined with macro functions by using the Common Lisp macro @code{macrolet}.
+(The definition of this macro is in the CL package, but in order for
+things to work on all platforms, the definition of this macro will
+presumably have to be copied and inserted into the custom @code{compat}
+code).
+
+@end enumerate
+
+In addition, the Compat package should define the macro
+@code{compat-if-fboundp}. (Similar macros such as
+@code{compile-when-fboundp} and @code{compile-case-fboundp} could be
+defined using similar principles.)  The @code{compat-if-fboundp} macro
+behaves just like an @code{(if (fboundp ...) ...)} clause when executed,
+but in addition, when it's compiled, it ensures that the code inside the
+@code{if-true} sub-block will not cause any byte compiler warnings about
+the function in question being unbound.  I think that the way to
+implement this would be to make @code{compat-if-fboundp} be a macro that
+does what it's supposed to do, but which defines its own byte code
+handler, which ensures that the particular warning in question will be
+suppressed.  (Actually ensuring that just the warning in question is
+suppressed, and not any others, might be rather tricky.  It certainly
+requires further thought).
+
+Note: An alternative way of avoiding both warnings about unbound
+functions and warnings about obsolete functions is to just call the
+function in question by using @code{funcall}, instead of calling the
+function directly.  This seems rather inelegant to me, though, and
+doesn't make it obvious why the function is being called in such a
+roundabout manner.  Perhaps the Compat package should also provide a
+macro @code{compat-funcall}, which works exactly like @code{funcall},
+but which indicates to anyone reading the code why the code is expressed
+in such a fashion.
+
+If you're wondering how to implement the part of the Compat generator
+where it scans Lisp code to find function calls for functions that it
+wants to do something about, I think the best way is to simply process
+the code using the Lisp function @code{read} and recursively descend any
+lists looking for function names as the first element of any list
+encountered.  This might extract out a few more functions than are
+actually called, but it is almost certainly safer than doing anything
+trickier like byte compiling the code, and attempting to look for
+function calls in the result.  (It could also be argued that the names
+of the functions should be extracted, not only from the first element of
+lists, but anywhere @code{symbol} occurs.  For example, to catch places
+where a function is called using @code{funcall} or @code{apply}.
+However, such uses of functions would not be affected by the surrounding
+macrolet call, and so there doesn't appear to be any point in extracting
+them).
+
+@uref{../../www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work -- Drag-n-Drop, Future Work -- Standard Interface for Enabling Extensions, Future Work -- Elisp Compatibility Package, Future Work
+@section Future Work -- Drag-n-Drop
+@cindex future work, drag-n-drop
+@cindex drag-n-drop, future work
+
+@strong{Abstract:} I propose completely redoing the drag-n-drop
+interface to make it powerful and extensible enough to support such
+concepts as drag over and drag under visuals and context menus invoked
+when a drag is done with the right mouse button, to allow drop handlers
+to be defined for all sorts of graphical elements including buffers,
+extents, mode lines, toolbar items, menubar items, glyphs, etc., and to
+allow different packages to add and remove drop handlers for the same
+drop sites without interfering with each other.  The changes are
+extensive enough that I think they can only be implemented in version
+22, and the drag-n-drop interface should remain experimental until then.
+
+The new drag-n-drop interface centers around the twin concepts of
+@dfn{drop site} and @dfn{drop handler}.  A @dfn{drop site} specifies a
+particular graphical element where an object can be dropped onto, and a
+@dfn{drop handler} encapsulates all of the behavior that happens when
+such an object is dragged over and dropped onto a drop site.
+
+Each drop site has an object associated with it which is passed to
+functions that are part of the drop handlers associated with that site.
+The type of this object depends on the graphical element that comprises
+the drop site.  The drop site object can be a buffer, an extent, a
+glyph, a menu path, a toolbar item path, etc.  (These last two object
+types are defined in @uref{lisp-interface.html,Lisp Interface Changes}
+in the sections on menu and toolbar API changes.  If we wanted to allow
+drops onto other kinds of drop sites, for example mode lines, we would
+have to create corresponding path objects).  Each such object type
+should be able to be accessed using the generalized property interface
+defined above, and should have a property called @code{drop-handlers}
+associated with it that specifies all of the drop handlers associated
+with the drop site.  Normally, this property is not accessed directly,
+but instead by using the drop handler API defined below, and Lisp
+packages should not make any assumptions about the format of the data
+contained in the @code{drop-handlers} property.
+
+Each drop handler has an object of type @code{drop-handler} associated
+with it, whose primary purpose is to be a container for the various
+properties associated with a particular drop handler.  These could
+include, for example, a function invoked when the drop occurs, a context
+menu invoked when a drop occurs as a result of a drag with the right
+mouse button, functions invoked when a dragged object enters, leaves, or
+moves within a drop site, the shape that the mouse pointer changes to
+when an object is dragged over a drop site that allows this particular
+object to be dropped onto it, the MIME types (actually a regular
+expression matching the MIME types) of the allowable objects that can be
+dropped onto the drop site, a @dfn{package tag} (a symbol specifying the
+package that created the drop handler, used for identification
+purposes), etc.  The drop handler object is passed to the functions that
+are invoked as a result of a drag or a drop, most likely indirectly as
+one of the properties of the drag or drop event passed to the function.
+Properties of a drop handler object are accessed and modified in the
+standard fashion using the generalized property interface.
+
+A drop handler is added to a drop site using the @code{add-drop-handler}
+function.  The drop handler itself can either be created separately
+using the @code{make-drop-handler} function and then passed in as one of
+the parameters to @code{add-drop-handler}, or it will be created
+automatically by the @code{add-drop-handler} function, if the drop
+handler argument is omitted, but keyword arguments corresponding to the
+valid keyword properties for a drop handler are specified in the
+@code{add-drop-handler} call.  Other functions, such as
+@code{find-drop-handler}, @code{add-drop-handler} (when specifying a
+drop handler before which the drop handler in question is to be added),
+@code{remove-drop-handler} etc. should be defined with obvious
+semantics.  All of these functions take or return a drop site object
+which, as mentioned above, can be one of several object types
+corresponding to graphical elements.  Defined drop handler functions
+locate a particular drop handler using either the @code{MIME-type} or
+@code{package-tag} property of the drop handler, as defined above.
+
+Logically, the drop handlers associated with a particular drop site are
+an ordered list.  The first drop handler whose specified MIME type
+matches the MIME type of the object being dragged or dropped controls
+what happens to this object.  This is important particularly because the
+specified MIME type of the drop handler can be a regular expression
+that, for example, matches all audio objects with any sub-type.
+
+In the current drag-n-drop API, there is a distinction made between
+objects with an associated MIME type and objects with an associated URL.
+I think that this distinction is arbitrary, and should not exist.  All
+objects should have a MIME type associated with them, and a new
+XEmacs-specific MIME type should be defined for URLs, file names,
+etc. as necessary.  I am not even sure that this is necessary, however,
+as the MIME specification may specify a general concept of a pointer or
+link to an object, which is exactly what we want.  Also in some cases
+(for example, the name of a file that is locally available), the pointer
+or link will have another MIME type associated with it, which is the
+type of the object that is being pointed to.  I am not quite sure how we
+should handle URL and file name objects being dragged, but I am positive
+that it needs to be integrated with the mechanism used when an object
+itself is being dragged or dropped.
+
+As is described in @uref{misc-user-event.html,a separate page}, the
+@code{misc-user-event} event type should be removed and split up into a
+number of separate event types.  Two such event types would be
+@code{drag-event} and @code{drop-event}.  A drop event is used when an
+object is actually dropped, and a drag event is used if a function is
+invoked as part of the dragging process.  (Such a function would
+typically be used to control what are called @dfn{drag under visuals},
+which are changes to the appearance of the drop site reflecting the fact
+that a compatible object is being dragged over it).  The drag events and
+drop events encapsulate all of the information that is pertinent to the
+drag or drop action occurring, including such information as the actual
+MIME type of the object in question, the drop handler that caused a
+function to be invoked, the mouse event (or possibly even a keyboard
+event) corresponding to the user's action that is causing the drag or
+drop, etc.  This event is always passed to any function that is invoked
+as a result of the drag or drop.  There should never be any need to
+refer to the @code{current-mouse-event} variable, and in fact, this
+variable should not be changed at all during a drag or a drop.
+
+@uref{../../www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work -- Standard Interface for Enabling Extensions, Future Work -- Better Initialization File Scheme, Future Work -- Drag-n-Drop, Future Work
+@section Future Work -- Standard Interface for Enabling Extensions
+@cindex future work, standard interface for enabling extensions
+@cindex standard interface for enabling extensions, future work
+
+@strong{Abstract:} Apparently, if you know the name of a package (for
+example, @code{fusion}), you can load it using the @code{require}
+function, but there's no standard way to turn it on or turn it off.  The
+only way to figure out how to do that is to go read the source file,
+where hopefully the comments at the start tell you the appropriate magic
+incantations that you need to run in order to turn the extension on or
+off.  There really needs to be standard functions, such as
+@code{enable-extension} and @code{disable-extension}, to do this sort of
+thing.  It seems like a glaring omission that this isn't currently
+present, and it's really surprising to me that nobody has remarked on
+this.
+
+The easy part of this is defining the interface, and I think it should
+be done as soon as possible.  When the package is loaded, it simply
+calls some standard function in the package system, and passes it the
+names of enable and disable functions, or perhaps just one function that
+takes an argument specifying whether to enable or disable.  In any case,
+this data is kept in a table which is used by the
+@code{enable-extension} and @code{disable-extension} functions.  There
+should also be functions such as @code{extension-enabled-p} and
+@code{enabled-extension-list}, and so on with obvious semantics.  The
+hard part is actually getting packages to obey this standard interface,
+but this is mitigated by the fact that the changes needed to support
+this interface are so simple.
+
+I have been conceiving of these enabling and disabling functions as
+turning the feature on or off globally.  It's probably also useful to
+have a standard interface returning a extension on or off in just the
+particular buffer.  Perhaps then the appropriate interface would involve
+registering a single function that takes an argument that specifies
+various things, such as turn off globally, turn on globally, turn on or
+off in the current buffer, etc.
+
+Part of this interface should specify the correct way to define global
+key bindings.  The correct rule for this, of course, is that the key
+bindings should not happen when the package is loaded, which is often
+how things are currently done, but only when the extension is actually
+enabled.  The key bindings should go away when the extension is
+disabled.  I think that in order to support this properly, we should
+expand the keymap interface slightly, so that, in addition to the other
+properties associated with each key binding, there is a list of shadow
+bindings.  Then there should be a function called
+@code{define-key-shadowing}, which is just like @code{define-key} but
+which also remembers the previous key binding in a shadow list.  Then
+there can be another function, something like @code{undefine-key}, which
+restores the binding to the most recently added item on the shadow list.
+There are already hash tables associated with each key binding, and it
+should be easy to stuff additional values, such as a shadow list, into
+the hash table.  Probably there should also be functions called
+@code{global-set-key-shadowing} and @code{global-unset-key-shadowing}
+with obvious semantics.
+
+Once this interface is defined, it should be easy to expand the custom
+package so it knows about this interface.  Then it will be possible to
+put all sorts of extensions on the options menu so that they could be
+turned off and turned on very easily, and then when you save the options
+out to a file, the desired settings for whether these extensions are
+enabled or not are saved out with it.  A whole lot of custom junk that's
+been added to a lot of different packages could be removed.  After doing
+this, we might want to think of a way to classify extensions according
+to how likely we think the user will want to use them.  This way we can
+avoid the problem of having a list of 100 extensions and the user not
+being able to figure out which ones might be useful.  Perhaps the most
+useful extensions would appear immediately on the extensions menu, and
+the less useful ones would appear in a submenu of that, and another
+submenu might contain even less useful extensions.  Of course the
+package authors might not be too happy with this, but the users probably
+will be.  I think this at least deserves a thought, although it's
+possible you might simply want to maintain a list on the web site of
+extensions and a judgment on first of all, how commonly a user might
+want this extension, and second of all, how well written and bug-free
+the package is.  Both of these sorts of judgments could be obtained by
+doing user surveys if need be.
+
+@uref{../../www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work -- Better Initialization File Scheme, Future Work -- Keyword Parameters, Future Work -- Standard Interface for Enabling Extensions, Future Work
+@section Future Work -- Better Initialization File Scheme
+@cindex future work, better initialization file scheme
+@cindex better initialization file scheme, future work
+
+@strong{Abstract:} A proposal is outlined for converting XEmacs to use
+the @code{.xemacs} subdirectory for its initialization files instead of
+putting them in the user's home directory.  In the process, a general
+pre-initialization scheme is created whereby all of the initialization
+parameters, such as the location of the initialization files, whether
+these files are loaded or not, where the initial frame is created,
+etc. that are currently specified by command line arguments, by
+environment variables, and other means, can be specified in a uniform
+way using Lisp code.  Reasonable default behavior for everything will
+still be provided, and the older, simpler means can be used if desired.
+Compatibility with the current location and name of the initialization
+file, and the current ill-chosen use for the @code{.xemacs} directory is
+maintained, and the problem of how to gracefully migrate a user from the
+old scheme into the new scheme while still allowing the user to use GNU
+Emacs or older versions of XEmacs is solved.  A proposal for changing
+the way that the initial frame is mapped is also outlined; this would
+allow the user's initialization file to control the way that the initial
+frame appears without resorting to hacks, while still making echo area
+messages visible as they appear, and allowing the user to debug errors
+in the initialization file.
+
+@subheading Principles in the new scheme
+
+@enumerate
+@item
+
+XEmacs has a defined @dfn{pre-initialization process}.  This process,
+whose purpose is to compute the values of the parameters that control
+how the initialization process proceeds, occurs as early as possible
+after the Lisp engine has been initialized, and in particular, it occurs
+before any devices have been opened, or before any initialization
+parameters are set that could reasonably be expected to be changed.  In
+fact, the pre-initialization process should take care of setting these
+parameters.  The code that implements the pre-initialization process
+should be written in Lisp and should be called from the Lisp function
+@code{normal-top-level}, and the general way that the user customizes
+this process should also be done using Lisp code.
+
+@item
+
+The pre-initialization process involves a number of properties, for
+example the directory containing the user initialization files (normally
+the @code{.xemacs} subdirectory), the name of the user init file, the
+name of the custom init file, where and what type the initial device is,
+whether and when the initial frame is mapped, etc.  A standard interface
+is provided for getting and setting the values of these properties using
+functions such as @code{set-pre-init-property},
+@code{pre-init-property}, etc.  At various points during the
+pre-initialization process, the value of many of these properties can be
+undecided, which means that at the end of the process, the value of
+these properties will be derived from other properties in some fashion
+that is specific to each property.
+
+@item
+
+The default values of these properties are set first from the registry
+under Windows, then from environment variables, then from command line
+switches, such as @code{-q} and @code{-nw}.
+
+@item
+
+One of the command line switches is @code{-pre-init}, whose value is a
+Lisp expression to be evaluated at pre-initialization time, similar to
+the @code{-eval} command line switch.  This allows any
+pre-initialization property to be set from the command line.
+
+@item
+
+Let's define the term @dfn{to determine a pre-initialization property}
+to mean the following: if the value of a property is undetermined, it
+is computed and set according to a rule that is specific to the
+property.  Then, after the pre-init properties are initialized from the
+registry, from environment variables, and from command line arguments,
+two of the pre-init properties (specifically the init file directory
+and the location of the
+@dfn{pre-init file}) are determined.  The purpose of the pre-init file is
+to contain Lisp code that is run at pre-initialization time, and to
+control how the initialization proceeds.  It is a bit similar to the
+standard init file, but the code in the pre-init file shouldn't do
+anything other than set pre-init properties.  Executing any code that
+does I/O might not produce expected results because the only device that
+will exist at the time is probably a stream device connected to the
+standard I/O of the XEmacs process.
+
+@item
+
+After the pre-init file has been run, all of the rest of the pre-init
+properties are determined, and these values are then used to control the
+initialization process.  Some of the rules used in determining specific
+properties are:
+
+@enumerate
+@item
+
+If the @code{.xemacs} sub-directory exists, and it's not obviously a
+package root (which probably means that it contains a file like
+@code{init.el} or @code{pre-init.el}, or, if neither of those files is
+present, that it doesn't contain any sub-directories or files that look
+like what would be in a package root), then it becomes the value of the
+init file directory.  Otherwise the user's home directory is used.
+@item
+          
+
+If the init file directory is the user's home directory, then the init
+file is called @code{.emacs}.  Otherwise, it's called @code{init.el}.
+@item
+          
+
+If the init file directory is the user's home directory, then the
+pre-init file is called @code{.xemacs-pre-init.el}.  Otherwise it's
+called @code{pre-init.el}. (One of the reasons for this rule has to do
+with the dialog box that might be displayed at startup.  This will be
+described below.)
+@item
+          
+
+If the init file directory is the user's home directory, then the custom
+init file is called @code{.xemacs-custom-init.el}.  Otherwise, it's
+called @code{custom-init.el}.
+
+@end enumerate
+
+@item
+
+After the first normal device is created, but before any frames are
+created on it, the XEmacs initialization code checks to see if the old
+init file scheme is being used, which is to say that the init file
+directory is the same as the user's home directory.  If that's the case,
+then normally a dialog box comes up (or a question is asked on the
+terminal if XEmacs is being run in a non-windowing mode) which asks if
+the user wants to migrate his initialization files to the new scheme.
+The possible responses are @strong{Yes}, @strong{No}, and @strong{No,
+and don't ask this again}.  If this last response is chosen, then the
+file @code{.xemacs-pre-init.el} in the user's home directory is created
+or appended to with a line of Lisp code that sets up a pre-init property
+indicating that this dialog box shouldn't come up again.  If the
+@strong{Yes} option is chosen, then any package root files in
+@code{.xemacs} are moved into @code{.xemacs/packages}, the file
+@code{.emacs} is moved into @code{.xemacs/init.el} and @code{.emacs} in
+the home directory becomes a symlink to this file.  This way some
+compatibility is still maintained with GNU Emacs and older versions of
+XEmacs.  The code that implements this has to be written very carefully
+to make sure that it doesn't accidentally delete or mess up any of the
+files that get moved around.
+
+@end enumerate
+
+@subheading The custom init file
+
+The @dfn{custom init file} is where the custom package writes its
+options.  This obviously needs to be a separate file from the standard
+init file.  It should also be loaded before the init file rather than
+after, as is usually done currently, so that the init file can override
+these options if it wants to.
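+
+For concreteness, here is a minimal sketch of the proposed load order.
+The variable @code{init-file-directory} and the file names follow the
+scheme described above (the non-home-directory case); none of this is
+existing code.
+
+@example
+;; Hypothetical sketch of the proposed startup order.
+(let ((custom-init-file (expand-file-name "custom-init.el"
+                                           init-file-directory))
+      (init-file (expand-file-name "init.el" init-file-directory)))
+  ;; Load the custom init file first ...
+  (when (file-exists-p custom-init-file)
+    (load custom-init-file))
+  ;; ... then the init file, so that it can override custom's settings.
+  (when (file-exists-p init-file)
+    (load init-file)))
+@end example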
+
+@subheading Frame mapping
+
+In addition to the above scheme, the way that XEmacs handles mapping the
+initial frame should be changed.  However, this change perhaps should be
+delayed to a later version of XEmacs because of the user visible changes
+that it entails and the possible breakage in people's init files that
+might occur. (For example, if the rest of the scheme is implemented in
+21.2, then this part of the scheme might want to be delayed until
+version 22.)  The basic idea is that the initial frame is not created
+before the initialization file is run, but instead a banner frame is
+created containing the XEmacs logo, a button that allows the user to
+cancel the execution of the init file, and an area where messages output
+in the process of running this file are displayed.  This area should
+contain a number of lines, which makes it better than the current
+scheme, where only the last message is visible.  After the init file is
+done, the initial frame is mapped.  This way the init file can make face
+changes and other such modifications that affect the initial frame, and
+the initial frame then comes up correctly with these changes, without
+the frame dancing and other problems that exist currently.
+
+There should be a function that allows the initialization file to
+explicitly create and map the first frame if it wants to.  There should
+also be a pre-init property that controls whether the banner frame
+appears (of course it defaults to true), a property controlling when the
+initial frame is created (before or after the init file, defaulting to
+after), and a property controlling whether the initial frame is mapped
+(normally true, but will be false if the @code{-unmapped} command line
+argument is given).
+
+If an error occurs in the init file, then the initial frame should
+always be created and mapped at that time so that the error is displayed
+and the debugger has a place to be invoked.
+
+@uref{../../www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work -- Keyword Parameters, Future Work -- Property Interface Changes, Future Work -- Better Initialization File Scheme, Future Work
+@section Future Work -- Keyword Parameters
+@cindex future work, keyword parameters
+@cindex keyword parameters, future work
+
+NOTE: These changes are partly motivated by the various user-interface
+changes elsewhere in this document, and partly for Mule support.  In
+general the various API's in this document would benefit greatly from
+built-in keywords.
+
+I would like to make keyword parameters an integral part of Elisp.  The
+idea here is that you use the @code{&key} identifier in the
+parameter list of a function and all of the following parameters
+specified are keyword parameters.  This means that when these arguments
+are specified in a function call, they are immediately preceded in the
+argument list by a @dfn{keyword}, which is a symbol beginning with the
+`:' character.  This allows any argument to be specified independently
+of any other argument with no need to place the arguments in any
+particular order.  This is particularly useful for functions that take
+many optional parameters; using keyword parameters makes the code much
+cleaner and easier to understand.
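+
+As an illustration, the proposed surface syntax would look roughly like
+this (a sketch of the proposal only; the function is made up, and today
+you would need @code{defun*} from the @code{cl} package to get something
+similar):
+
+@example
+;; Everything after &key is a keyword parameter, optionally with a
+;; default value.
+(defun make-widget (name &key (width 100) (height 50) parent)
+  (list name width height parent))
+
+;; Callers name each argument explicitly, in any order:
+(make-widget "scrollbar" :height 20)
+;; => ("scrollbar" 100 20 nil)
+@end example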
+
+The @code{cl} package already provides keyword parameters of a sort, but
+I would like to make this more integrated and usable in a standard
+fashion.  The interface that I am proposing is essentially compatible
+with the keyword interface in Common Lisp, but it may be a subset of the
+Common Lisp functionality, especially in the first implementation.
+There is one departure from the Common Lisp specification that I would
+like to make in order to make it much easier to add keyword parameters
+to existing functions with optional parameters, and in general, to make
+optional and keyword parameters coexist more easily.  The Common Lisp
+specification indicates that if a function has both optional and keyword
+parameters, the optional parameters are always processed before the
+keyword parameters.  This means, for example, that if a function has
+three required parameters, two optional parameters, and some number of
+keyword parameters following, and the program attempts to call this
+function by passing in the three required arguments, and then some
+keyword arguments, the first keyword specified and the argument
+following it get assigned to the first and second optional parameters as
+specified in the function definition.  This is certainly not what is
+intended, and means that if a function defines both optional and keyword
+parameters, any calls of this function must specify @code{nil} for all
+of the optional arguments before using any keywords.  If the function
+definition is later changed to add more optional parameters, all
+existing calls to this function that use any keyword arguments will
+break.  This problem goes away if we simply process keyword parameters
+before the optional parameters.
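+
+A concrete example of the problem (the function is hypothetical and
+serves only to illustrate how the arguments get bound):
+
+@example
+(defun frob (a b c &optional d e &key color width)
+  ...)
+
+;; The caller supplies the three required arguments plus one keyword:
+(frob 1 2 3 :color 'red)
+
+;; Common Lisp rules: the optionals are filled first, so d gets bound
+;; to :color and e to 'red, and no keyword argument is seen at all.
+;; Proposed rule: keywords are processed first, so color gets 'red
+;; while d and e remain nil.
+@end example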
+
+The primary changes needed to support the keyword syntax are:
+
+@enumerate
+@item
+
+The subr object type needs to be modified to contain additional slots
+for the number and names of any keyword parameters.
+@item
+      
+
+The implementation of the @code{funcall} function needs to be modified
+so that it knows how to process keyword parameters.  This is the only
+place that will require very much intricate coding, and much of the
+logic that would need to be added can be lifted directly from the
+@code{cl} code.
+@item
+      
+
+A new macro, similar to the @code{DEFUN} macro, and probably called
+@code{DEFUN_WITH_KEYWORDS}, needs to be defined so that built-in Lisp
+primitives containing keywords can be created.  Now, the
+@code{DEFUN_WITH_KEYWORDS} macro should take an additional parameter
+which is a string, which consists of the part of the lambda list
+declaration for this primitive that begins with the @code{&key}
+specifier.  This string is parsed in the @code{DEFSUBR} macro during
+XEmacs initialization, and is converted into the appropriate structure
+that needs to be stored into the subr object.  In addition, the
+@var{max_args} parameter of the @code{DEFUN} macro needs to be
+incremented by the number of keyword parameters and these parameters are
+passed to the C function simply as extra parameters at the end.  The
+@code{DEFSUBR} macro can sort out the actual number of required,
+optional and keyword parameters that the function takes, once it has
+parsed the keyword parameter string.  (An alternative that might make
+the declaration of a primitive a little bit easier to understand would
+involve adding another parameter to the @code{DEFUN_WITH_KEYWORDS} macro
+that specifies the number of keyword parameters.  However, this would
+require some additional complexity in the preprocessor definition of the
+@code{DEFUN_WITH_KEYWORDS} macro, and probably isn't worth
+implementing).
+@item
+      
+
+The byte compiler would have to be modified slightly so that it knows
+about keyword parameters when it parses the parameter declaration of a
+function.  For example, so that it issues the correct warnings
+concerning calls to that function with incorrect arguments.
+@item
+      
+
+The @code{make-docfile} program would have to be modified so that it
+generates the correct parameter lists for primitives defined using the
+@code{DEFUN_WITH_KEYWORDS} macro.
+@item
+      
+
+Possibly other aspects of the help system that deal with function
+descriptions might have to be modified.
+@item
+      
+
+A helper function might need to be defined to make it easier for
+primitives that use both the @code{&rest} and @code{&key}
+specifiers to parse their argument lists.
+
+@end enumerate
+
+@subheading Internal API for C primitives with keywords - necessary for many of the new Mule APIs being defined.
+
+@example
+  DEFUN_WITH_KEYWORDS (Ffoo, "foo", 2, 5, 6, ALLOW_OTHER_KEYWORDS,
+      (ichi, ARG_NIL), (ni, ARG_NIL), (san, ARG_UNBOUND), 0,
+      (arg1, arg2, arg3, arg4, arg5)
+      )
+  @{
+    ...
+  @}
+  
+  -> C fun of 12 args:
+  
+  (arg1, ... arg5, ichi, ..., roku, other keywords)
+  
+  An actual example declaration:
+  
+  DEFUN_WITH_KEYWORDS (Ffoo, "foo", 1,2,0 (bar, baz) <- arg list
+  [ MIN ARGS, MAX ARGS, something that could be REST, SPECIFY_DEFAULT or
+  REST_SPEC]
+  
+  [#KEYWORDS [ ALLOW_OTHER, SPECIFY_DEFAULT, ALLOW_OTHER_SPECIFY_DEFAULT
+  6, ALLOW_OTHER_SPECIFY_DEFAULT,
+  
+  (ichi, 0) (ni, 0), (san, DEFAULT_UNBOUND), (shi, "t"), (go, "5"),
+  (roku, "(current-buffer)")
+  <- specifies arguments, default values (string to be read into Lisp
+     data during init; then forms evalled at fn ref time.
+  
+  ,0 <- [INTERACTIVE SPEC] )
+  
+  LO = Lisp_Object
+  
+  -> LO Ffoo (LO bar, LO baz, LO ichi, LO ni, LO san, LO shi, LO go,
+              LO roku, int numkeywords, LO *other_keywords)
+  
+  #define DEFUN_WITH_KEYWORDS (fun, funstr, minargs, maxargs, argspec, \
+           #args, num_keywords, keywordspec, keywords, intspec) \
+  LO fun (DWK_ARGS (maxargs, args) \
+          DWK_KEYWORDS (num_keywords, keywordspec, keywords))
+  
+  #define DWK_KEYWORDS (num_keywords, keywordspec, keywords) \
+          DWK_KEYWORDS ## keywordspec (keywords)
+          DWK_OTHER_KEYWORDS ## keywordspec)
+  
+  #define DWK_KEYWORDS_ALLOW_OTHER (x,y)
+          DWK_KEYWORDS (x,y)
+  
+  #define DWK_KEYWORDS_ALLOW_OTHER_SPECIFICATIONS (x,y)
+          DWK_KEYWORDS_SPECIFY_DEFAULT (x,y)
+  
+  #define DWK_KEYWORDS_SPECIFY_DEFAULT (numkey, key)
+          ARGLIST_CAR ## numkey key
+  
+  #define ARGLT_GRZ (x,y) LO CAR x, LO CAR y
+@end example
+
+@node Future Work -- Property Interface Changes, Future Work -- Toolbars, Future Work -- Keyword Parameters, Future Work
+@section Future Work -- Property Interface Changes
+@cindex future work, property interface changes
+@cindex property interface changes, future work
+
+In my past work on XEmacs, I already expanded the standard property
+functions of @code{get}, @code{put}, and @code{remprop} to work on
+objects other than symbols and defined an additional function
+@code{object-plist} for this interface.  I'd like to expand this
+interface further and advertise it as the standard way to make property
+changes in objects, especially the new objects that are going to be
+defined in order to support the added user interface features of version
+22.  My proposed changes are as follows:
+
+@enumerate
+@item
+
+A new concept associated with each property called a @dfn{default value}
+is introduced.  (This concept already exists, but not in a well-defined
+way.) The default value is the value that the property assumes for
+certain value retrieval functions such as @code{get} when it is
+@dfn{unbound}, which is to say that its value has not been explicitly
+specified. Note: the way to make a property unbound is to call
+@code{remprop}.  Note also that for some built-in properties, setting
+the property to its default value is equivalent to making it unbound.
+@item
+      
+
+The behavior of the @code{get} function is modified.  If the @code{get}
+function is called on a property that is unbound and the third, optional
+@var{default} argument is @code{nil}, then the default value of the
+property is returned.  If the @var{default} argument is not @code{nil},
+then whatever was specified as the value of this argument is returned.
+For the most part, this is upwardly compatible with the existing
+definition of @code{get} because all user-defined properties have an
+initial default value of @code{nil}.  Code that calls the @code{get}
+function and specifies @code{nil} for the @var{default} argument, and
+expects to get @code{nil} returned if the property is unbound, is almost
+certainly wrong anyway.
+@item
+      
+
+A new function, @code{get1} is defined.  This function does not take a
+default argument like the @code{get} function.  Instead, if the property
+is unbound, an error is signaled.  Note: @code{get} can be implemented
+in terms of @code{get1}.
+@item
+      
+
+New functions @code{property-default-value} and @code{property-bound-p}
+are defined with the obvious semantics.
+@item
+      
+
+An additional function @code{property-built-in-p} is defined which takes
+two arguments, the first one being a symbol naming an object type, and
+the second one specifying a property, and indicates whether the property
+name has a built-in meaning for objects of that type.
+@item
+      
+
+It is not necessary, or even desirable, for all object types to allow
+user-defined properties.  It is always possible to simulate user-defined
+properties for an object by using a weak hash table.  Therefore, whether
+an object allows a user to define properties or not should depend on the
+meaning of the object.  If an object does not allow user-defined
+properties, the @code{put} function should signal an error, such as
+@code{undefined-property}, when given any property other than those that
+are predefined.
+@item
+      
+
+A function called @code{user-defined-properties-allowed-p} should be
+defined with the obvious semantics.  (See the previous item.)
+@item
+      
+
+Three more functions should be defined, called
+@code{built-in-property-name-list}, @code{property-name-list}, and
+@code{user-defined-property-name-list}.
+
+@end enumerate
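+
+To make the interface described in the list above more concrete, here is
+a sketch of how the proposed functions might behave.  None of these
+calls reflect current behavior; @code{get1}, @code{property-bound-p},
+@code{property-built-in-p} and
+@code{user-defined-properties-allowed-p} are the proposed functions, and
+the object and property names are only examples.
+
+@example
+(setq ext (make-extent 1 10))       ; in a buffer with enough text
+
+(property-bound-p ext 'face)        ;; => nil (never explicitly set)
+(get ext 'face)                     ;; => the default value of `face'
+(get ext 'face 'fallback)           ;; => fallback
+(get1 ext 'face)                    ;; signals an error: unbound
+
+(put ext 'face 'bold)
+(property-bound-p ext 'face)        ;; => t
+(remprop ext 'face)                 ;; makes `face' unbound again
+
+(property-built-in-p 'extent 'face) ;; => t
+(user-defined-properties-allowed-p ext)
+@end example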
+
+Another idea:
+
+@example
+(define-property-method
+  predicate object-type
+  predicate cons :(KEYWORD)  (all lists beginning with KEYWORD)
+
+  :put putfun
+  :get
+  :remprop
+  :object-props
+  :clear-properties
+  :map-properties
+
+  e.g. (define-property-method 'hash-table
+         :put #'(lambda (obj key value) (puthash key obj value)))
+@end example
+
+
+@node Future Work -- Toolbars, Future Work -- Menu API Changes, Future Work -- Property Interface Changes, Future Work
+@section Future Work -- Toolbars
+@cindex future work, toolbars
+@cindex toolbars
+
+@menu
+* Future Work -- Easier Toolbar Customization::  
+* Future Work -- Toolbar Interface Changes::  
+@end menu
+
+@node Future Work -- Easier Toolbar Customization, Future Work -- Toolbar Interface Changes, Future Work -- Toolbars, Future Work -- Toolbars
+@subsection Future Work -- Easier Toolbar Customization
+@cindex future work, easier toolbar customization
+@cindex easier toolbar customization, future work
+
+@strong{Abstract:} One of XEmacs' greatest strengths is its ability to
+be customized endlessly.  Unfortunately, it is often too difficult to
+figure out how to do this.  There has been some recent work like the
+Custom package, which helps in this regard, but I think there's a lot
+more work that needs to be done.  Here are some ideas (which certainly
+could use some more thought).
+
+Although there is currently an @code{edit-toolbar} package, it is not
+well integrated with XEmacs, and in general it is much too hard to
+customize the way toolbars look.  I would like to see an interface that
+works a bit like the way things work under Windows, where you can
+right-click on a toolbar to get a menu of options that allows you to
+change aspects of the toolbar.  The general idea is that if you
+right-click on an item itself, you can do things to that item, whereas
+if you right-click on a blank part of a toolbar, you can change the
+properties of the toolbar.  Some of the items on the right-click menu
+for a particular toolbar button should be specified by the button
+itself.  Others should be standard.  For example, there should be an
+@strong{Execute} item which simply does what would happen if you
+left-click on a toolbar button.  There should probably be a
+@strong{Delete} item to get rid of the toolbar button and a
+@strong{Properties} item, which brings up a property sheet that allows
+you to do things like change the icon and the command string that's
+associated with the toolbar button.
+
+The options to change the appearance of the toolbar itself should
+probably appear both on the context menu for specific buttons, and on
+the menu that appears when you click on a blank part of the toolbar.
+That way, if there isn't a blank part of the toolbar, you can still
+change the toolbar appearance.  As for what appears in these items, in
+Outlook Express, for example, there are three different menu items.  The
+first is called @strong{Buttons}; it pops up a window that allows you to
+edit the toolbar, which for us could pop up a new frame running
+@code{edit-toolbar.el}.  The second item is
+called @strong{Align}, which contains a submenu that says @strong{Top},
+@strong{Bottom}, @strong{Left}, and @strong{Right}, which will be just
+like setting the default toolbar position.  The third one says
+@strong{Text Labels}, which would just let you select whether there are
+captions or not.  I think all three of these are useful and are easy to
+implement in XEmacs.  These things also need to be integrated with
+custom so that a user can control whether these options apply to all
+sessions, and in such a case can save the settings out to an options
+file.  @code{edit-toolbar.el} in particular needs to integrate with
+custom.  Currently it has some sort of hokey stuff of its own, which it
+saves out to a @code{.toolbar} file.  Another useful option to have,
+once we draw the captions dynamically rather than using pre-generated
+ones, would be the ability to change the font size of the captions.  I'm
+sure that Kyle, for one, would appreciate this.
+
+(This is incomplete.....)
+
+@uref{../../www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work -- Toolbar Interface Changes,  , Future Work -- Easier Toolbar Customization, Future Work -- Toolbars
+@subsection Future Work -- Toolbar Interface Changes
+@cindex future work, toolbar interface changes
+@cindex toolbar interface changes, future work
+
+I propose changing the way that toolbars are specified to make them more
+flexible.
+
+@enumerate
+@item
+
+A new format for the vector that specifies a toolbar item is allowed.
+In this format, the first three items of the vector are required and
+are, respectively, a caption, a glyph list, and a callback.  The glyph
+list and callback arguments are the same as in the current toolbar item
+specification, and the caption is a string specifying the caption text
+placed below the toolbar glyph.  The caption text is required so that
+toolbar items can be identified for the purpose of retrieving and
+changing their property values.  Putting the caption first also makes it
+easy to distinguish between the new and the old toolbar item vector
+formats.  In the old format, the first item, the glyph list, is either a
+list or a symbol.  In the new format, the first item is a string.  In
+the new format, following the three required items are optional keyword
+items, specified using keywords in the same format as the menu item
+vector format.  The keywords that should be predefined are:
+@code{:help-echo}, @code{:context-menu}, @code{:drop-handlers}, and
+@code{:enabled-p}.  The @code{:enabled-p} and @code{:help-echo} keyword
+arguments are the same as the third and fourth items in the old toolbar
+item vector format.  The @code{:context-menu} keyword is a list in
+standard menu format that specifies additional items that will appear
+when the context menu for the toolbar item is popped up.  (Typically,
+this happens when the right mouse button is clicked on the toolbar
+item).  The @code{:drop-handlers} keyword is for use by the new
+drag-n-drop interface (see @uref{drag-n-drop.html,Drag-n-Drop Interface
+Changes} ), and is not normally specified or modified directly.
+@item
+      
+
+Conceivably, there could also be keywords that are associated with a
+toolbar itself, rather than with a particular toolbar item.  These
+keyword properties would be specified using keywords and arguments that
+occur before any toolbar item vectors, similarly to how things are done
+in menu specifications.  Possible properties could include
+@code{:captioned-p} (whether the captions are visible under the
+toolbar), @code{:glyphs-visible-p} (whether the toolbar glyphs are
+visible), and @code{:context-menu} (additional items that will appear on
+the context menus for all toolbar items and additionally will appear on
+the context menu that is popped up when the right mouse button is
+clicked over a portion of the toolbar that does not have any toolbar
+buttons in it).  The current standard practice with regards to such
+properties seems to be to have separate specifiers, such as
+@code{left-toolbar-width}, @code{right-toolbar-width},
+@code{left-toolbar-visible-p}, @code{right-toolbar-visible-p}, etc.  It
+could easily be argued that there should be no such toolbar specifiers
+and that all such properties should be part of the toolbar instantiator
+itself.  In this scheme, the only separate specifiers that would exist
+for individual properties would be default values.  There are a lot of
+reasons why an interface change like this makes sense.  For example,
+currently when VM sets its toolbar, it also sets the toolbar width and
+similar properties.  If you change which edge of the frame the VM
+toolbar occurs in, VM will also have to go and modify all of the
+position-specific toolbar specifiers for all of the other properties
+associated with a toolbar.  It doesn't really seem to make sense to me
+for the user to be specifying the width and visibility and such of
+specific toolbars that are attached to specific edges because the user
+should be free to move the toolbars around and expect that all of the
+toolbar properties automatically move with the toolbar. (It is also easy
+to imagine, for example, that a toolbar might not be attached to the
+edge of the frame at all, but might be floating somewhere on the user's
+screen).  With an interface where these properties are separate
+specifiers, this has to be done manually.  Currently, having the various
+toolbar properties be inside of toolbar instantiators makes them
+difficult to modify, but this will be different with the API that I
+propose below.
+@item
+      
+
+I propose an API for modifying toolbar and toolbar item properties, as
+well as making other changes to toolbar instantiators, such as inserting
+or deleting toolbar items.  This API is based around the concept of a
+path.  There are two kinds of paths here -- @dfn{toolbar paths} and
+@dfn{toolbar item paths}.  Each kind of path is an object (of type
+@code{toolbar-path} and @code{toolbar-item-path}, respectively) whose
+properties specify the location in a toolbar instantiator where changes
+to the instantiator can be made.  A toolbar path, for example, would be
+created using the @code{make-toolbar-path} function, which takes a
+toolbar specifier (or optionally, a symbol, such as @code{left},
+@code{right}, @code{default}, or @code{nil}, which refers to a
+particular toolbar), and optionally, parameters such as the locale and
+the tag set, which specify which actual instantiator inside of the
+toolbar specifier is to be modified.  A toolbar item path is created
+similarly using a function called @code{make-toolbar-item-path}, which
+takes a toolbar specifier and a string naming the caption of the toolbar
+item to be modified, as well as, of course, optionally the locale and
+tag set parameters and such.
+
+The usefulness of these path objects is as arguments to functions that
+will use them as pointers to the place in a toolbar instantiator where
+the modification should be made.  Recall, for example, the generalized
+property interface described above.  If a function such as @code{get} or
+@code{put} is called on a toolbar path or toolbar item path, it will use
+the information contained in the path object to retrieve or modify a
+property located at the end of the path.  The toolbar path objects can
+also be passed to new functions that I propose defining, such as
+@code{add-toolbar-item}, @code{delete-toolbar-item}, and
+@code{find-toolbar-item}.  These functions should be parallel to the
+functions for inserting, deleting, finding, etc. items in a menu.  The
+toolbar item path objects can also be passed to the drop-handler
+functions defined in @uref{drag-n-drop.html,Drag-n-Drop Interface
+Changes} to retrieve or modify the drop handlers that are associated
+with a toolbar item.  (The idea here is that you can drag an object and
+drop it onto a toolbar item, just as you could onto a buffer, an extent,
+a menu item, or any other graphical element).
+@item
+      
+
+We should at least think about allowing for separate default and
+buffer-local toolbars.  The user should either be able to position these
+toolbars one above the other, or side by side, occupying a single
+toolbar line.  In the latter case, the boundary between the toolbars
+should be draggable, and if a toolbar takes up more room than is
+allocated for it, there should be arrows that appear on one or both
+sides of the toolbar so that the items in the toolbar can be scrolled
+left or right.  (For that matter, this sort of interface should exist
+even when there is only one toolbar that is on a particular toolbar
+line, because the toolbar may very well have more items than can be
+displayed at once, and it's silly in such a case if it's impossible to
+access the items that are not currently visible).
+@item
+      
+
+The default context menu for toolbars (which should be specified using a
+specifier called @code{default-toolbar-context-menu} according to the
+rules defined above) should contain entries allowing the user to modify
+the appearance of a toolbar.  Entries would include, for example,
+whether the toolbar is captioned, whether the glyphs for the toolbar are
+visible (if the toolbar is captioned but its glyphs are not visible, the
+toolbar appears as nothing but text; you can set things up this way, for
+example, in Netscape), an option that brings up a package for editing
+the contents of a toolbar, an option to allow the caption face to be
+changed (perhaps through an @code{edit-faces} or @code{custom}
+interface), etc.
+
+@end enumerate
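+
+Putting the pieces in the list above together, a toolbar item in the new
+format and a few calls on the proposed path API might look like this.
+All of the function names follow the proposal; the icon variables, the
+commands, and the exact argument conventions are hypothetical, and
+nothing here is implemented.
+
+@example
+;; New-format item: caption, glyph list, callback, then keywords.
+["Open" open-toolbar-icon toolbar-open
+ :help-echo "Open a file"
+ :context-menu (["Customize Icon..." customize-open-icon t])]
+
+;; Modifying a toolbar and one of its items through path objects:
+(setq tb-path   (make-toolbar-path 'default)
+      item-path (make-toolbar-item-path 'default "Open"))
+
+(put tb-path 'captioned-p t)            ; toolbar-level property
+(put item-path 'help-echo "Open a file or directory")
+(add-toolbar-item tb-path ["Print" print-toolbar-icon toolbar-print])
+(find-toolbar-item tb-path "Print")
+@end example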
+
+@node Future Work -- Menu API Changes, Future Work -- Removal of Misc-User Event Type, Future Work -- Toolbars, Future Work
+@section Future Work -- Menu API Changes
+@cindex future work, menu API changes
+@cindex menu API changes, future work
+
+
+@enumerate
+@item
+
+I propose making a specifier for the menubar associated with the frame.
+The specifier should be called @code{default-menubar} and should replace
+the existing @code{current-menubar} variable.  This would increase the
+power of the menubar interface and bring it in line with the toolbar
+interface.  (In order to provide proper backward compatibility, we might
+have to @uref{symbol-value-handlers.html,complete the symbol value
+handler mechanism})
+@item
+      
+
+I propose an API for modifying menu instantiators similar to the API
+composed above for toolbar instantiators.  A new object called a
+@dfn{menu path} (of type @code{menu-path}) can be created using the
+@code{make-menu-path} function, and specifies a location in a particular
+menu instantiator where changes can be made.  The first argument to
+@code{make-menu-path} specifies which menu to modify and can be a
+specifier, a value such as @code{nil} (which means to modify the default
+menubar associated with the selected frame), or perhaps some other kind
+of specification referring to some other menu, such as the context menus
+invoked by the right mouse button.  The second argument to
+@code{make-menu-path}, also required, is a list of zero or more strings
+that specifies the particular menu or menu item in the instantiator that
+is being referred to.  The remaining arguments are optional and would be
+a locale, a tag set, etc.  The menu path object can be passed to
+@code{get}, @code{put} or other standard property functions to access or
+modify particular properties of a menu or a menu item.  It can also be
+passed to expanded versions of the existing functions such as
+@code{find-menu-item}, @code{delete-menu-item}, @code{add-menu-button},
+etc.  (It is really a shame that @code{add-menu-item} is an obsolete
+function because it is a much better name than @code{add-menu-button}).
+Finally, the menu path object can be passed to the drop-handler
+functions described in @uref{drag-n-drop.html,Drag-n-Drop Interface
+Changes} to access or modify the drop handlers that are associated with
+a particular menu item.
+@item
+      
+
+New keyword properties should be added to the menu item vector.  These
+include @code{:help-echo}, @code{:context-menu} and
+@code{:drop-handlers}, with similar semantics to the corresponding
+keywords for toolbar items.  (It may seem a bit strange at first to have
+a context menu associated with a particular menu item, but it is a user
+interface concept that exists both in Open Look and in Windows, and
+really makes a lot of sense if you give it a bit of thought).  These
+properties may not actually be implemented at first, but at least the
+keywords for them should be defined.
+
+@end enumerate
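+
+A short sketch of how the menu path API proposed in the list above might
+be used (the functions and argument conventions follow the proposal and
+are not implemented):
+
+@example
+;; Point at the "Save As..." item in the default menubar's File menu.
+(setq path (make-menu-path nil '("File" "Save As...")))
+
+(get path 'help-echo)
+(put path 'help-echo "Save the buffer under a new name")
+
+;; Point at the File menu itself and add a button to it.
+(setq file-menu (make-menu-path nil '("File")))
+(add-menu-button file-menu ["Revert" revert-buffer t])
+@end example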
+
+@uref{../../www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work -- Removal of Misc-User Event Type, Future Work -- Mouse Pointer, Future Work -- Menu API Changes, Future Work
+@section Future Work -- Removal of Misc-User Event Type
+@cindex future work, removal of misc-user event type
+@cindex removal of misc-user event type, future work
+
+@strong{Abstract:} This section describes why the misc-user event type
+should be split up into a number of different event types, and how to do
+this.
+
+The misc-user event should not exist as a single event type.  It should
+be split up into a number of different event types: one for scrollbar
+events, one for menu events, and one or two for drag-n-drop events.
+Possibly there will be other event types created in the future.  The
+reason for this is that the misc-user event was a bad design choice when
+I made it, and it has only gotten worse with Oliver's attempts to add
+features to it to make it be used for drag-n-drop.  I know that there
+was originally a separate drag-n-drop event type, and it was folded into
+the misc-user event type on my recommendation, but I have now realized
+the error of my ways.  I had originally created a single event type in
+an attempt to prevent some Lisp programs from breaking because they
+might have a case statement over various event types, and would not be
+able to handle new event types appearing.  I think now that these
+programs simply need to be written in a way to handle new event types
+appearing.  It's not very hard to do this.  You just use predicates
+instead of doing a case statement over the event type.  If we preserve
+the existing predicate called @code{misc-user-event-p}, and just make
+sure that it evaluates to true when given any user event type other than
+the standard simple ones, then most existing code will not break either
+when we split the event types up like this, or if we add any new event
+types in the future.
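+
+For example, dispatch code written in the following style keeps working
+no matter how many new event types are introduced.  The predicates are
+existing XEmacs functions; the @code{handle-*} functions are
+placeholders for the program's own handlers.
+
+@example
+(cond ((key-press-event-p event)
+       (handle-key event))
+      ((button-press-event-p event)
+       (handle-button event))
+      ((misc-user-event-p event)
+       ;; true for scrollbar, menu, drag-n-drop and any future
+       ;; non-simple user event types
+       (handle-misc-user event))
+      (t
+       (dispatch-event event)))
+@end example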
+
+More specifically, the only clean way to design the misc-user event type
+would be to add a sub-type field to it, and then have the nature of all
+the other fields in the event type be dependent on this sub-type.  But
+then in essence, we'd just be reimplementing the whole event-type scheme
+inside of misc-user events, which would be rather pointless.
+
+@node Future Work -- Mouse Pointer, Future Work -- Extents, Future Work -- Removal of Misc-User Event Type, Future Work
+@section Future Work -- Mouse Pointer
+@cindex future work, mouse pointer
+@cindex mouse pointer, future work
+
+@menu
+* Future Work -- Abstracted Mouse Pointer Interface::  
+* Future Work -- Busy Pointer::  
+@end menu
+
+@node Future Work -- Abstracted Mouse Pointer Interface, Future Work -- Busy Pointer, Future Work -- Mouse Pointer, Future Work -- Mouse Pointer
+@subsection Future Work -- Abstracted Mouse Pointer Interface
+@cindex future work, abstracted mouse pointer interface
+@cindex abstracted mouse pointer interface, future work
+
+@strong{Abstract:} We need to create a new image format that allows
+standard pointer shapes to be specified in a way that works on all
+window systems.  I suggest that this be called @code{pointer}, which
+has one tag associated with it, named @code{:data}, and whose value is a
+string.  The possible strings that can be specified here are predefined
+by XEmacs, and are guaranteed to work across all window systems.  This
+means that we may need to provide our own definition for pointer shapes
+that are not standard on some systems.  In particular, there are a lot
+more standard pointer shapes under X than under Windows, and most of
+these pointer shapes are fairly useful.  There are also a few pointer
+shapes (I think the hand, for example) on Windows, but not on X.
+Converting the X pointer shapes to Windows should be easy because the
+definitions of the pointer shapes are simply XBM files, which we can
+read under Windows.  Going the other way might be a little bit more
+difficult, but it should still not be that hard.
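+
+Under this proposal, specifying a portable pointer might look something
+like the following.  The @code{pointer} format and the name
+@code{"watch"} are part of the proposal, not of current XEmacs;
+@code{busy-pointer-glyph} here stands for whatever pointer glyph is
+being set.
+
+@example
+;; Proposed portable pointer image format:
+(set-glyph-image busy-pointer-glyph [pointer :data "watch"])
+
+;; For comparison, the existing X-only format discussed below:
+(set-glyph-image busy-pointer-glyph [cursor-font :data "watch"])
+@end example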
+
+While we're at it, we should change the image format currently called
+@code{cursor-font} to @code{x-cursor-font}, because it only works under
+X Windows.  We also need to change the format called @code{resource} to
+be @code{mswindows-resource}.  At least in the case of
+@code{cursor-font}, the old value should be maintained for compatibility
+as an obsolete alias.  The @code{resource} format was added so recently
+that it's possible that we can just change it.
+
+@uref{../../www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work -- Busy Pointer,  , Future Work -- Abstracted Mouse Pointer Interface, Future Work -- Mouse Pointer
+@subsection Future Work -- Busy Pointer
+@cindex future work, busy pointer
+@cindex busy pointer, future work
+
+Automatically make the mouse pointer switch to a busy shape (watch
+signal) when XEmacs has been "busy" for more than, e.g. 2 seconds.
+Define the @dfn{busy time} as the time since the last time that XEmacs was
+ready to receive input from the user.  An implementation might be:
+
+@enumerate
+@item
+Set up an asynchronous timeout, to signal after the busy time; these
+are triggered through a call to QUIT so they will be triggered even
+when the code is busy doing something.
+@item
+We already have an "emacs_is_blocking" flag when we are waiting for
+input.  In the same place, when we are about to block and wait for
+input (regardless of whether input is already present), maybe call a
+hook, which in this case would remove the timer and put back the
+normal mouse shape.  Then when we exit the blocking stage (we got
+some input), call another hook, which in this case will start the
+timer.  Note that we don't want these "blocking" hooks to be triggered
+just because of an accept-process-output or some similar thing that
+retrieves events, only to put them back onto a queue for later
+processing.  Maybe we want some sort of flag that's bound by those
+routines saying that we aren't really waiting for input.  Making
+that flag Lisp-accessible allows it to be set by similar sorts of
+Lisp routines (if there are any?) that loop retrieving events but
+defer them, or only drain the queue, or whatnot.  #### Think about
+whether it would make some sense to try and be more clever in our
+determinations of what counts as "real waiting for user input", e.g.
+whether the event gets dispatched (unfortunately this occurs way too
+late, we want to know to remove the busy cursor @strong{before} getting an
+event), maybe whether there are any events waiting to be processed or
+we'll truly block, etc. (e.g. one possibility: if there is input on
+the queue already when we "block" for input, don't remove the
+busy-wait pointer, but trigger its removal when we dispatch a user
+event).
+@end enumerate
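+
+A rough Lisp-level sketch of the idea in the list above.  Everything
+here is hypothetical: the blocking hooks and @code{set-busy-pointer} do
+not exist, and the real implementation would live mostly in C.  Only
+@code{add-async-timeout} and @code{disable-async-timeout} are existing
+functions.
+
+@example
+(defvar busy-pointer-delay 2
+  "Seconds of busy time before the pointer changes to a watch.")
+
+(defvar busy-pointer-timer nil)
+
+;; Run from the hypothetical "we got input, leaving the blocking
+;; state" hook:
+(defun start-busy-pointer-timer ()
+  (setq busy-pointer-timer
+        (add-async-timeout busy-pointer-delay
+                           (lambda (ignore) (set-busy-pointer t))
+                           nil)))
+
+;; Run from the hypothetical "about to block waiting for input" hook:
+(defun cancel-busy-pointer-timer ()
+  (when busy-pointer-timer
+    (disable-async-timeout busy-pointer-timer)
+    (setq busy-pointer-timer nil))
+  (set-busy-pointer nil))
+@end example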
+
+@node Future Work -- Extents, Future Work -- Version Number and Development Tree Organization, Future Work -- Mouse Pointer, Future Work
+@section Future Work -- Extents
+@cindex future work, extents
+@cindex extents, future work
+
+@menu
+* Future Work -- Everything should obey duplicable extents::  
+@end menu
+
+@node Future Work -- Everything should obey duplicable extents,  , Future Work -- Extents, Future Work -- Extents
+@subsection Future Work -- Everything should obey duplicable extents
+@cindex future work, everything should obey duplicable extents
+@cindex everything should obey duplicable extents, future work
+
+A lot of functions don't properly track duplicable extents.  For
+example, the @code{concat} function does, but the @code{format} function
+does not, and extents in keymap prompts are not displayed either.  All
+of the functions that generate strings or string-like entities should
+track the extents that are associated with the strings.  Currently this
+is difficult because there is no general mechanism implemented for doing
+this.  I propose such a general mechanism, which would not be hard to
+implement, and would be easy to use in other functions that build up
+strings.
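+
+For instance, with existing functions (the contrast between
+@code{concat} and @code{format} is the one described above):
+
+@example
+(setq s (copy-sequence "hello"))
+(set-extent-property (make-extent 0 5 s) 'duplicable t)
+(set-extent-face (extent-at 0 s) 'bold)
+
+;; `concat' copies the duplicable extent into its result ...
+(extent-at 0 (concat s " world"))    ;; => #<extent ...>
+;; ... but `format' loses it.
+(extent-at 0 (format "%s world" s))  ;; => nil
+@end example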
+
+The basic idea is that we create a C structure that is analogous to a
+Lisp string in that it contains string data and lists of extents for
+that data.  Unlike standard Lisp strings, however, this structure (let's
+call it @code{lisp_string_struct}) can be incrementally updated and its
+allocation is handled explicitly so that no garbage is generated.  (This
+is important, for example, in the event-handling code, which would want to
+use this structure, but needs to not generate any garbage for efficiency
+reasons).  Both the string data and the list of extents in this string
+are handled using dynarrs so that it is easy to incrementally update
+this structure.  Functions should exist to create and destroy instances
+of @code{lisp_string_struct}, to generate a Lisp string from a
+@code{lisp_string_struct} and vice-versa, to append a sub-string of a
+Lisp string to a @code{lisp_string_struct}, to just append characters to
+a @code{lisp_string_struct}, etc.  The only thing possibly tricky about
+implementing these functions is implementing the copying of extents from
+a Lisp string into a @code{lisp_string_struct}.  However, there is
+already a function @code{copy_string_extents()} that does basically this
+exact thing, and it should be easy to create a modified version of this
+function.
+
+@uref{../../www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work -- Version Number and Development Tree Organization, Future Work -- Improvements to the @code{xemacs.org} Website, Future Work -- Extents, Future Work
+@section Future Work -- Version Number and Development Tree Organization
+@cindex future work, version number and development tree organization
+@cindex version number and development tree organization, future work
+
+@strong{Abstract:} The purpose of this proposal is to present a coherent
+plan for how development branches in XEmacs are managed.  This will
+cover such issues as stable versus experimental branches, creating new
+branches, synchronizing patches between branches, and how version
+numbers are assigned to branches.
+
+A development branch is defined to be a linear series of releases of the
+XEmacs code base, each of which is derived from the previous one.  When
+the XEmacs development tree is forked and two branches are created where
+there used to be one, the branch that is intended to be more stable and
+have fewer changes made to it is considered the one that inherits the
+parent branch, and the other branch is considered to have begun at the
+branching point.  The less stable of the two branches will eventually be
+forked again, while this will not happen usually to the more stable of
+the two branches, and its development will eventually come to an end.
+This means that every branch has a definite ending point.  For example,
+the 20.x branch began at the point when the released
+19.13 code tree was split into a 19.x and a 20.x branch, and a 20.x
+branch will end when the last 20.x release (probably numbered 20.5 or
+20.6) is released.
+
+I think that there should always be three active development branches at
+any time.  These branches can be designated the stable, the semi-stable,
+and the experimental branches.  This situation has existed in the
+current code tree as soon as the 21.0 development branch was split.  In
+this situation, the stable branch is the 20.x series.  The semi-stable
+branch is the 21.0 release and the stability releases that follow.  The
+experimental branch is the branch that was created as the result of the
+21.0 development branch split.  Typically, the stable branch has been
+released for a long period of time.  The semi-stable branch has been
+released for a short period of time, or is about to be released, and the
+experimental branch has not yet been released, and will probably not be
+released for awhile.  The conditions that should hold in all
+circumstances are:
+
+@enumerate
+@item
+
+There should be three active branches.
+@item
+
+The experimental branch should never be in feature freeze.
+
+@end enumerate
+
+The reason for the second condition is to ensure that active development
+can always proceed and is never throttled, as is happening currently at
+the end of the 21.0 release cycle.  What this means is that as soon as
+the experimental branch is deemed to be stable enough to go into feature
+freeze:
+
+@enumerate
+@item
+
+The current stable branch is made inactive and all further development
+on it ceases.
+@item
+
+The semi-stable branch, which by now should have been released for a
+fair amount of time, and should be fairly stable, gets renamed to the
+stable branch.
+@item
+
+The experimental branch is forked into two branches, one of which
+becomes the semi-stable branch, and the other, the experimental branch.
+
+@end enumerate
+
+The stable branch is always in high resistance, which is to say that the
+only changes that can be made to the code are important bug fixes
+involving a small amount of code where it should be clear just by
+reading the code that no destabilizing code has been introduced.  The
+semi-stable branch is in low resistance, which means that no major
+features can be added, but except right before a release fairly major
+code changes are allowed.  Features can be added if they are
+sufficiently small, if they are deemed sufficiently critical due to
+severe problems that would exist if the features were not added (for
+example, replacement of the unexec mechanism with a portable solution
+would be a feature that could be added to the semi-stable branch
+provided that it did not involve an overly radical code re-architecture,
+because otherwise it might be impossible to build XEmacs on some
+architectures or with some compilers), or if the primary purpose of the
+new feature is to remedy an incompleteness in a recent architectural
+change that was not finished in a prior release due to lack of time (for
+example, abstracting the mouse pointer and list-of-colors interfaces,
+which were left out of 21.0).  There is no feature resistance in place
+in the experimental branch, which allows full development to proceed at
+all times.
+
+In general, both the stable and semi-stable branches will contain
+previous net releases.  In addition, there will be beta releases in all
+three branches, and possibly development snapshots between the beta
+releases.  It's obviously necessary to have a good version numbering
+scheme in order to keep everything straight.
+
+First of all, it needs to be immediately clear from the version number
+whether the release is a beta release or a net release.  Steve has
+proposed getting rid of the beta version numbering system, which I think
+would be a big mistake.  Furthermore, the net release version number and
+beta release version number should be kept separate, just as they are
+now, to make it completely clear where any particular release stands.
+There may be alternate ways of phrasing a beta release other than
+something like 21.0 beta 34, but in all such systems, the beta number
+needs to be zero for any release version.  Three possible alternative
+systems, none of which I like very much, are:
+
+@enumerate
+@item
+
+The beta number is simply an extra number in the regular version number.
+Then, for example, 21.0 beta 34 becomes 21.0.34.  The problem is that
+the release version, which would simply be called 21.0, appears to be
+earlier than 21.0 beta 34.
+@item
+
+The beta releases appear as later revisions of earlier releases.  Then,
+for example, 21.1 beta 34 becomes 21.0.34, and 21.0 beta 34 would have
+to become 21.-1.34.  This has both the obvious ugliness of negative
+version numbers and the problem that it makes beta releases appear to be
+associated with their previous releases, when in fact they are more
+closely associated with the following release.
+@item
+
+Simply make the beta version number be negative.  In this scheme, you'd
+start with something like -1000 as the first beta, and then 21.0 beta 34
+would get renumbered to 21.0.-968.  Obviously, this is a crazy and
+convoluted scheme as well, and we would be best to avoid it.
+
+@end enumerate
+
+Currently, the between-beta snapshots are not numbered, but I think that
+they probably should be.  If appropriate scripts are set up to automate
+beta release, it should be very easy to have a version number
+automatically updated whenever a snapshot is made.  The number could be
+added either as a separate snapshot number, and you'd have 21.0 beta 34
+pre 1, which comes before 21.0 beta 34; or we could make the beta
+number be floating point, and then the same snapshot would have to be
+called 21.0 beta 33.1.  The latter solution seems quite kludgey to me.
+
+There also needs to be a clear way to distinguish, when a net release is
+made, which branch the release is a part of.  Again, three solutions
+come to mind:
+
+@enumerate
+@item
+
+The major version number reflects which development branch the release
+is in and the minor version number indicates how many releases have been
+made along this branch.  In this scheme, 21.0 is always the first
+release of the 21 series development branch, and when this branch is
+split, the child branch that becomes the experimental branch gets
+version numbers starting with 22.  This scheme is the simplest, and it's
+the one I like best.
+@item
+
+We move to a three-part version number.  In this scheme, the first two
+numbers indicate the branch, and the third number indicates the release
+along the branch.  In this scheme, we have numbers like 21.0.1, which
+would be the second release in the 21.0 series branch, and 21.1.2, which
+would be the third release in the
+21.1 series branch.  The major version number then gets increased
+only very occasionally, and only when a sufficiently major architectural
+change has been made, particularly one that causes compatibility
+problems with code written for previous branches.  I think schemes like
+this are unnecessary in most circumstances, because usually either the
+major version number ends up changing so often that the second number is
+always either zero or one, or the major version number never changes,
+and as such becomes useless.  By the time the major version number would
+change, the product itself has changed so much that it often gets
+renamed.  Furthermore, it is clear that the two version number scheme
+has been used throughout most of the history of Emacs, and recently we
+have been following the two number scheme also.  If we introduced a
+third revision number, at this point it would both confuse existing code
+that assumed there were two numbers, and would look rather silly given
+that the major version number is so high and would probably remain at
+the same place for quite a long time.
+@item
+
+A third scheme that would attempt to cross the two schemes would keep
+the same concept of major version number as for the three number scheme,
+and would compress the second and third numbers of the three number
+scheme into one number by using increments of ten.  For example, the
+current 21.x branch would have releases No. 21.0, 21.1, etc.  The next
+branch would be No. 21.10, 21.11, etc.  I don't like this scheme very
+much because it seems rather kludgey, and also because it is not used in
+any other product as far as I know.
+@item
+
+Another scheme that would combine the second and third numbers in the
+three number scheme would be to have the releases in the current 21.x
+series be numbered 21.0, then 21.01, then 21.02, etc.  The next series
+is 21.1, then 21.11, then 21.12, etc.  This is similar to the way that
+version numbers are done for DOS and Windows.  I also think that this
+scheme is fairly silly because, like the previous scheme, its only
+purpose is to avoid increasing the major version number very much.  But
+given that we already have a fairly large major version number,
+there doesn't seem to be any particular problem with increasing this
+number by one every year or two.  Some people will object that by doing
+this, it becomes impossible to tell when a change is so major that it
+causes a lot of code breakage, but past releases have not been accurate
+indicators of this.  For example,
+19.12 caused a lot of code breakage, but 20.0 caused less, and 21.0
+caused less still.  In the GNU Emacs world, there were byte code changes
+made between 19.28 and 19.29, but as far as I know, not between 19.29
+and 20.0.
+
+@end enumerate
+
+With three active development branches, synchronizing code changes
+between the branches is obviously somewhat of a problem.  To make things
+easier, I propose a few general guidelines:
+
+@enumerate
+@item
+
+Merging between different branches need not happen that often.  It
+should not happen more often than necessary to avoid undue burden on the
+maintainer, but needs to be done at all defined checkpoints.  These
+checkpoints need to be noted in all of the places that track changes
+along the branch, for example, in all of the change logs and in all of
+the CVS tags.
+@item
+
+Every code change that can be considered a self-contained unit, no
+matter how large or small, needs to have a change log entry, preferably
+a single change log entry associated with it.  This is an absolute
+requirement.  There should be no code changes without an associated
+change log entry.  Otherwise, it is highly likely that patches will not
+be correctly synchronized across all versions, and will get lost.  There
+is no need for change log entries to contain unnecessary detail though,
+and it is important that there be no more change log entries than
+necessary, which means that two or more change log entries associated
+with a single patch need to be grouped together if possible.  This might
+imply that there should be one global change log instead of change logs
+in each directory, or at the very least, the number of separate change
+logs should be kept to a minimum.
+@item
+
+The patch that is associated with each change log entry needs to be kept
+around somewhere.  The reason for this is that when synchronizing code
+from some branch to some earlier branch, it is necessary to go through
+each change log entry and decide whether a change is worthy to make it
+into a more stable branch.  If so, the patch associated with this change
+needs to be individually applied to the earlier branch.
+@item
+
+All changes made in more stable branches get merged into less stable
+branches unless the change really is completely unnecessary in the less
+stable branch because it is superseded by some other change.  This will
+probably mean more developers making changes to the semi-stable branch
+than to the experimental branch.  This means that developers should
+strive to do their development in the most stable branch that they
+expect their code to go into.  An alternative to this which is perhaps
+more workable is simply to insist that all developers make all patches
+based off of the experimental branch, and then later merge these patches
+down to the more stable branches as necessary.  This means, however,
+that submitted patches should never be combinations of two or more
+unrelated changes.  Whenever such patches are submitted, they should
+either be rejected (which should apply to anybody who should know
+better, which probably means everybody on the beta list and anybody else
+who is a regular contributor), or the maintainer or some other
+designated party needs to filter the combined patch into separate
+patches, one per logical change.
+@item
+
+The maintainer should keep all the patches around in some data base, and
+the patches should be given an identifier consisting of the author of
+the patch, the date the patch was submitted, and some other identifying
+characteristic, such as a number, in case there is more than one patch
+on the same date by the same author.  The database should hopefully be
+correctly marked at all times with something indicating which branches
+the patch has been applied to, and this database should hopefully be
+publicly visible so that patch authors can determine whether their
+patches have been applied, and whether their patches have been received,
+so that patches do not get needlessly resubmitted.
+@item
+
+Global automatable changes such as textual renaming, reordering, and
+additions or deletions of parameters in function calls should still be
+allowed, even with multiple development branches.  (Sometimes these are
+necessary for code cleanliness, and in the long run, they save a lot of
+time, even though they may cause some headaches in the short term.)  In
+general, when such changes are made, they should occur in a separate
+beta version that contains only such changes and no other patches, and
+the changes should be made in both the semi-stable and experimental
+branches at the same time.  The description of the beta version should
+make it very clear that the beta is comprised of such changes.  The
+reason for doing these things is to make it easier for people to diff
+between beta versions in order to figure out the changes that were made
+without the diff getting cluttered up by these code cleanliness changes
+that don't change any actual behavior.
+
+@end enumerate
+
+@uref{../../www.666.com/ben,Ben Wing}
+
+@node Future Work -- Improvements to the @code{xemacs.org} Website, Future Work -- Keybindings, Future Work -- Version Number and Development Tree Organization, Future Work
+@section Future Work -- Improvements to the @code{xemacs.org} Website
+@cindex future work, improvements to the @code{xemacs.org} website
+@cindex improvements to the @code{xemacs.org} website, future work
+
+The @code{xemacs.org} web site is the face that XEmacs presents to the
+outside world.  In my opinion, its most important function is to present
+information about XEmacs in a way that attracts new XEmacs users
+and co-contributors.  Existing members of the XEmacs community can
+probably find out most of the information they want to know about XEmacs
+regardless of what shape the web site is in, or for that matter, perhaps
+even if the web site doesn't exist at all.  However, potential new users
+and co-contributors who go to the XEmacs web site and find it out of
+date and/or lacking the information that they need are likely to be
+turned away and may never return.  For this reason, I think it's
+extremely important that the web site be up-to-date, well-organized, and
+full of information that an inquisitive visitor is likely to want to
+know.
+
+The current XEmacs web site needs a lot of work if it is to meet these
+standards.  I don't think it's reasonable to expect one person to do all
+of this work and make continual updates as needed, especially given the
+dismal record that the XEmacs web site has had.  The proper thing to do
+is to place the web site itself under CVS and allow many of the core
+members to remotely check files in and out.  This way, for example,
+Steve could update the part of the site that contains the current
+release status of XEmacs. (Much of this could be done by a script that
+Steve executes when he sends out a beta release announcement which
+automatically HTML-izes the mail message and puts it in the appropriate
+place on the web site.  There are programs that are specifically
+designed to convert email messages into HTML, for example
+@code{mhonarc}.)  Meanwhile, the @code{xemacs.org} mailing list
+administrator (currently Jason Mastaler, I think) could maintain the
+part of the site that describes the various mailing lists and other
+addresses at @code{xemacs.org}.  Someone like me (perhaps through a
+proxy typist) could maintain the part of the site that specifies the
+future directions that XEmacs is going in, etc., etc.
+
+Here are some things that I think it's very important to add to the web
+site.
+
+@enumerate
+@item
+
+A page describing in detail how to get involved in the XEmacs
+development process, how to submit and where to submit various patches
+to the XEmacs core or associated packages, how to contact the
+maintainers and core developers of XEmacs and the maintainers of various
+packages, etc.
+@item
+
+A page describing exactly how to download, compile, and install XEmacs,
+and how to download and install the various binary distributions.  This
+page should particularly cover in detail how exactly the package system
+works from an installation standpoint and how to correctly compile and
+install under Microsoft Windows and Cygwin.  This latter section should
+cover what compilers are needed under Microsoft Windows and Cygwin, and
+how to get and install the Cygwin components that are needed.
+@item
+
+A page describing where to get the various ancillary libraries that can
+be linked with XEmacs, such as the JPEG, TIFF, PNG, X-Face, DBM, and
+other libraries.  This page should also cover how to correctly compile
+and install these libraries, including under Microsoft Windows (or at
+least it should contain pointers to where this information can be
+found).  Also, it should describe anything that needs to be specified as
+an option to @code{configure} in order for XEmacs to link with and make
+use of these libraries or of Motif or CDE.  Finally, this page should
+list which versions of the various libraries are required for use with
+the various different beta versions of XEmacs.  (Remember, this can
+change from beta to beta, and someone needs to keep a watchful eye on
+this).
+@item
+
+Pointers to any other sites containing information on XEmacs.  This
+would include, for example, Hrvoje's XEmacs on Windows FAQ and my
+Architecting XEmacs web site.  (Presumably, most of the information in
+this section will be temporary.  Eventually, these pages should be
+integrated into the main XEmacs web site).
+@item
+
+A page listing the various sub-projects in the XEmacs development
+process and who is responsible for each of these sub-projects, for
+example development of the package system, administration of the mailing
+lists, maintenance of stable XEmacs versions, maintenance of the CVS web
+interface, etc.  This page should also list all of the packages that are
+archived at @code{xemacs.org} and who is the maintainer or maintainers
+for each of these packages.
+
+@end enumerate
+
+@subheading Other Places with an XEmacs Presence
+
+We should try to keep an XEmacs presence in all of the major places on
+the web that are devoted to free software or to the "open source"
+community.  This includes, for example, the open source web site at
+@uref{http://opensource.oreilly.com/}
+(I'm already in the process of contacting this site), the Freshmeat site
+at @uref{http://www.freshmeat.net/},
+the various announcement news groups (for example,
+@uref{news:comp.os.linux.announce,comp.os.linux.announce} and the
+Windows announcement news group), etc.
+
+@uref{http://www.666.com/ben/,Ben Wing}
+
+@node Future Work -- Keybindings, Future Work -- Byte Code Snippets, Future Work -- Improvements to the @code{xemacs.org} Website, Future Work
+@section Future Work -- Keybindings
+@cindex future work, keybindings
+@cindex keybindings, future work
+
+@menu
+* Future Work -- Keybinding Schemes::  
+* Future Work -- Better Support for Windows Style Key Bindings::  
+* Future Work -- Misc Key Binding Ideas::  
+@end menu
+
+@node Future Work -- Keybinding Schemes, Future Work -- Better Support for Windows Style Key Bindings, Future Work -- Keybindings, Future Work -- Keybindings
+@subsection Future Work -- Keybinding Schemes
+@cindex future work, keybinding schemes
+@cindex keybinding schemes, future work
+
+@strong{Abstract:} We need a standard mechanism that allows different
+global key binding schemes to be defined.  Ideally, this would be the
+@uref{keyboard-actions.html,keyboard action interface} that I have
+proposed; however, this would require a lot of work on the part of mode
+maintainers and other external Elisp packages and will not be ready in
+the short term.  So I propose a very kludgy interface, along the lines
+of what is done in Viper currently.  Perhaps we can rip that key munging
+code out of Viper and make a separate extension that implements a global
+key binding scheme munging feature.  This way a key binding scheme could
+rearrange all the default keys and have all sorts of other code, which
+depends on the standard keys being in their default location, still
+work.
+
+@node Future Work -- Better Support for Windows Style Key Bindings, Future Work -- Misc Key Binding Ideas, Future Work -- Keybinding Schemes, Future Work -- Keybindings
+@subsection Future Work -- Better Support for Windows Style Key Bindings
+@cindex future work, better support for windows style key bindings
+@cindex better support for windows style key bindings, future work
+
+@strong{Abstract:} This page describes how we could create an XEmacs
+extension that modifies the global key bindings so that a Windows user
+would feel at home when using the keyboard in XEmacs.  Some of these
+bindings don't conflict with standard XEmacs keybindings and should be
+added by default, or at the very least under Windows, and probably under
+X Windows as well. Other key bindings would need to be implemented in a
+Windows compatibility extension which can be enabled and disabled on the
+fly, following the conventions outlined in
+@uref{enabling-extensions.html,Standard interface for enabling
+extensions}.  Ideally, this should be implemented using the
+@uref{keyboard-actions.html,keyboard action interface}, but this will not
+be available in the short term, so we will have to resort to some awful
+kludges, following the model of Michael Kifer's Viper mode.
+
+We really need to make XEmacs provide standard Windows key bindings as
+much as possible.  Currently, for example, there are at least two
+packages that allow the user to make a selection using the shifted arrow
+keys, and neither package works all that well, or is maintained.  There
+should be one well-written piece of code that does this, and it should
+be a standard part of XEmacs.  In fact, it should be turned on by
+default under Windows, and probably under X as well. (As an aside here,
+one point of contention in how to implement this involves what happens
+if you select a region using the shifted arrow keys and then hit the
+regular arrow keys.  Does the region remain selected or not?  I think
+there should be a variable that controls which of these two behaviors
+you want.  We can argue over what the default value of this variable
+should be.  The standard Windows behavior here is to keep the region
+selected, but move the insertion point elsewhere, which is unfortunately
+impossible to implement in XEmacs.)
+
+Some thought should be given to what to do about the standard Windows
+control and alt key bindings.  Under NTEmacs, there is a variable that
+controls whether the alt key behaves like the Emacs meta key, or whether
+it is passed on to the menu as in standard Windows programs.  We should
+surely implement this and put this option on the @strong{Options} menu.
+Making @kbd{Alt-f} for example, invoke the @strong{File} menu, is not
+all that disruptive in XEmacs, because the user can always type @kbd{ESC
+f} to get the meta key functionality.  Making @kbd{Control-x}, for
+example, do @strong{Cut}, is much, much more problematic, of course, but
+we should consider how to implement this anyway.  One possibility would
+be to move all of the current Emacs control key bindings onto
+control-shift plus a key, and to make the simple control keys follow the
+Windows standard as much as possible.  This would mean, for example,
+that we would have the following keybindings:
+
+@example
+@kbd{Control-x} ==> @strong{Cut}
+@kbd{Control-c} ==> @strong{Copy}
+@kbd{Control-v} ==> @strong{Paste}
+@kbd{Control-z} ==> @strong{Undo}
+@kbd{Control-f} ==> @strong{Find}
+@kbd{Control-a} ==> @strong{Select All}
+@kbd{Control-s} ==> @strong{Save}
+@kbd{Control-p} ==> @strong{Print}
+@kbd{Control-y} ==> @strong{Redo}
+@kbd{Control-n} ==> @strong{New}
+@kbd{Control-o} ==> @strong{Open}
+@kbd{Control-w} ==> @strong{Close Window}
+@end example
+
+(The @strong{Redo} functionality @emph{is} available in XEmacs with Kyle
+Jones' @code{redo.el} package, but it should be better integrated.)
+
+The changes described in the previous paragraph should be put into an
+extension named @code{windows-keys.el} (see
+@uref{enabling-extensions.html,Standard interface for enabling
+extensions}) so that it can be enabled and disabled on the fly using a
+menu item and can be selected as the default for a particular user in
+their custom options file. Once this is implemented, the Windows
+installer should also be modified so that it brings up a dialog box that
+allows the user to make a selection of which key binding scheme they
+would prefer as the default, either the XEmacs standard bindings, Vi
+bindings (which would be Viper mode), Windows-style bindings, Brief,
+CodeWright, Visual C++, or whatever we manage to implement.
+
+@uref{http://www.666.com/ben/,Ben Wing}
+
+@node Future Work -- Misc Key Binding Ideas,  , Future Work -- Better Support for Windows Style Key Bindings, Future Work -- Keybindings
+@subsection Future Work -- Misc Key Binding Ideas
+@cindex future work, misc key binding ideas
+@cindex misc key binding ideas, future work
+
+@itemize
+@item
+M-123 ... do digit arg
+
+@item
+However, M-( groups commands together until M-).
+
+@item
+Nested M-() are allowed.
+
+@item
+A numeric prefix plus () repeats each group of commands N times as a
+unit.
+
+@item
+M-() by itself forms an anonymous macro, and there should be a
+command to repeat it, like vi's execute-macro command; when there is
+no preceding (), it repeats the last command or group of commands,
+acting much like vi's dot (repeat) command.
+
+@item
+C-numbers switches to a particular window.  Maybe 1-3 or 1-4 does
+this.
+
+@item
+C-4 or 5 to 9 (or ()? maybe reserved) switches to a particular frame.
+
+@item
+Possibly C-Sh-numbers select more windows or frames.
+
+@item
+M-C-1 through M-C-0 should perhaps execute anonymous macros (another
+possibility is insert register, but that can easily be simulated with a
+keyboard macro).
+
+@item
+What about C-S M-C-S M-S??
+
+@item
+I think there should be default function key bindings for
+@strong{ILLEGIBLE} similar to what I have: load, save, cut, copy,
+paste, kill line, start/end macro, do macro.
+@end itemize
+
+@node Future Work -- Byte Code Snippets, Future Work -- Lisp Stream API, Future Work -- Keybindings, Future Work
+@section Future Work -- Byte Code Snippets
+@cindex future work, byte code snippets
+@cindex byte code snippets, future work
+
+@itemize
+@item
+For use in time-critical places (e.g. redisplay) such as display
+tables, a simple piece of code is evalled, e.g.
+@example
+(int-to-char (1+ c))
+@end example
+where @code{c} is the argument, specbound.
+
+@item
+can be compiled like
+@example
+(byte-compile-snippet (int-to-char (1+ c)) (c))
+                                           ^^^
+                                environment of local vars
+@end example
+
+@item
+need eval with bindings (not hard to implement)
+(extendable when lexical scoping present)
+
+@item
+What's the return value of @code{byte-compile-snippet}?
+(Look to see how this might be implemented; see the sketch after this
+list.)
+@end itemize
+
+@menu
+* Future Work -- Autodetection::  
+* Future Work -- Conversion Error Detection::  
+* Future Work -- BIDI Support::  
+* Future Work -- Localized Text/Messages::  
+@end menu
+
+@node Future Work -- Autodetection, Future Work -- Conversion Error Detection, Future Work -- Byte Code Snippets, Future Work -- Byte Code Snippets
+@subsection Future Work -- Autodetection
+@cindex future work, autodetection
+@cindex autodetection, future work
+
+There are various proposals contained here.
+
+@subheading New Implementation of Autodetection Mechanism
+
+The current autodetection mechanism in XEmacs Mule has many
+problems.  For one thing, it is wrong too much of the time.  Another
+problem, although easily fixed, is that priority lists are fixed rather
+than varying depending on the particular locale; and finally, it
+doesn't warn the user when it's not sure of the encoding or when there's
+a mistake made during decoding.  In both of these situations the user
+should be presented with a list of likely encodings and given the
+choice, rather than simply proceeding anyway and giving a result that is
+likely to be wrong and may result in data corruption when the file is
+saved out again.
+
+All coding systems are categorized according to their type.  Currently
+the types include ISO 2022, Big 5, Shift-JIS, UTF-8, and a few others.  In
+the future there will be many more types defined, and this mechanism
+will be generalized so that it is easily extendable by the Lisp
+programmer.
+
+In general, each coding system type defines a series of subtypes which
+are handled differently for the purpose of detection. For example, ISO
+2022 defines many different subtypes such as 7 bit, 8 bit, locking
+shift, designating and so on. UCS2 may define subtypes such as normal
+and byte reversed.
+
+The detection engine works conceptually by calling the detection
+methods of all of the defined coding system types in parallel on
+successive chunks of data (which may, for example, be 4K in size, but
+where the size makes no difference except for optimization purposes)
+and watching the results until either a definite answer is determined
+or the end of data is reached. The way the definite answer is
+determined will be defined below. The detection method of the coding
+system type is passed some data and a chunk of memory, which the
+method uses to store its current state (and which is maintained
+separately for each coding system type by the detection engine between
+successive calls to the coding system type's detection method). Its
+return value should be an alist consisting of a list of all of the
+defined subtypes for that coding system type along with a level of
+likelihood and a list of additional properties indicating certain
+features detected in the data. The extra properties returned are
+defined entirely by the particular coding system type and are used
+only in the algorithm described below under "user control." However,
+the levels of likelihood have a standard meaning as follows:
+
+Level 4 means "near certainty" and typically indicates that a
+signature has been detected, usually at the beginning of the data,
+indicating that the data is encoded in this particular coding system
+type. An example of this would be the byte order mark at the beginning
+of UCS2 encoded data or the GZIP mark at the beginning of GZIP data.
+
+Level 3 means "highly likely" and indicates that tell-tale signs have
+been discovered in the data that are characteristic of this particular
+coding system type. Examples of this might be ISO 2022 escape
+sequences or the current Unicode end of line markers at regular
+intervals.
+
+Level 2 means "strongly statistically likely" indicating that
+statistical analysis concludes that there's a high chance that this
+data is encoded according to this particular type. For example, this
+might mean that for UCS2 data, there is a high proportion of null bytes
+or other repeated bytes in the odd-numbered bytes of the data and a
+high variance in the even-numbered bytes of the data. For Shift-JIS,
+this might indicate that there were no illegal Shift-JIS sequences
+and a fairly high occurrence of common Shift-JIS characters.
+
+Level 1 means "weak statistical likelihood" meaning that there is some
+indication that the data is encoded in this coding system type. In
+fact, there is a reasonable chance that it may be some other type as
+well. This means, for example, that no illegal sequences were
+encountered and at least some data was encountered that is purposely
+not in other coding system types. For Shift-JIS data, this might mean
+that some bytes in the range 128 to 159 were encountered in the data.
+
+Level 0 means "neutral" which is to say that there's either not enough
+data to make any decision or that the data could well be interpreted
+as this type (meaning no illegal sequences), but there is little or no
+indication of anything particular to this particular type.
+
+Level -1 means "weakly unlikely" meaning that some data was
+encountered that could conceivably be part of the coding system type
+but is probably not. For example, successively long line-lengths or
+very rarely-encountered sequences.
+
+Level -2 means "strongly unlikely" meaning that typically a number
+of illegal sequences were encountered.
+
+The algorithm to determine when to stop and indicate that the data has
+been detected as a particular coding system uses a priority list,
+which is typically specified as part of the language environment
+determined from the current locale or the user's choice. This priority
+list consists of a list of coding system subtypes, along with a
+minimum level required for positive detection and optionally
+additional properties that need to be present. Using the return values
+from all of the detection methods called, the detection engine looks
+through this priority list until it finds a positive match. In this
+priority list, along with each subtype is a particular coding system
+to return when the subtype is encountered. (For example, in a
+Japanese-language environment particular subtypes of ISO 2022 will be
+associated with the Japanese coding system version of those
+subtypes). It is perfectly legal, and in fact quite common, to list the
+same subtype more than once in the priority list with successively
+lower requirements. Other properties that can be listed in the priority
+list for a subtype are "reject", meaning that the data should never be
+detected as this subtype, or "ask", meaning that if the data is
+detected to be this subtype, the user will be asked whether they
+actually mean this. This latter property could be used, for example,
+towards the bottom of the priority list.
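+
+None of this is implemented; purely to illustrate the data involved
+(all names and the exact layout here are assumptions), a detection
+method's return value and one language environment's priority list
+might look something like this:
+
+@example
+;; Hypothetical return value of the ISO 2022 detection method:
+;; an alist of (SUBTYPE LIKELIHOOD PROPERTIES...).
+'((iso-2022-7bit        3 seen-esc-designation)
+  (iso-2022-8bit        0)
+  (iso-2022-lock-shift -2 illegal-sequence))
+
+;; Hypothetical priority list for a Japanese language environment:
+;; entries are (SUBTYPE MINIMUM-LEVEL CODING-SYSTEM FLAGS...).
+'((iso-2022-7bit 3 iso-2022-jp)
+  (utf-16        4 utf-16-le)
+  (shift-jis     2 shift_jis)
+  (shift-jis     1 shift_jis ask)     ; lower certainty: ask the user
+  (big5          0 big5 reject))      ; never autodetect as Big 5 here
+@end example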
+
+In addition there is a global variable which specifies the minimum
+number of characters required before any positive match is
+reported. There may actually be more than one such variable for
+different sources of data, for example, detection of files versus
+detection of subprocess data.
+
+Whenever a file is opened and detected to be a particular coding
+system, the subtype, the coding system and the associated level of
+likelihood will be prominently displayed either in the echo area or in
+a status box somewhere.
+
+If no positive match is found according to the priority list, or if
+the matches that are found have the "ask" property on them, then the
+user will be presented with a list of choices of possible encodings
+and asked to choose one. This list is typically sorted first by level
+of likelihood, and then within this, by the order in which the
+subtypes appear in the priority list. This list is displayed in a
+special kind of dialog box or other buffer allowing the user, in
+addition to just choosing a particular encoding, to view what the
+file would look like if it were decoded according to the type.
+
+Furthermore, whenever a file is decoded according to a particular
+type, the decoding engine keeps track of status values that are output
+by the coding system type's decoding method. Generally, this status
+will be in the form of errors or warnings of various levels, some of
+which may be severe enough to stop the decoding entirely, and some of
+which may either indicate definitely malformed data but from which
+it's possible to recover, or simply data that appears rather
+questionable. If any of these status values are reported during
+decoding, the user will be informed of this and asked "are you sure?"
+As part of the "are you sure" dialog box or question, the user can
+display the results of the decoding to make sure it's correct. If the
+user says "no, they're not sure," then the same list of choices as
+previously mentioned will be presented.
+
+@subheading Implementation of Coding System Priority Lists in Various Locales
+
+@enumerate
+@item
+Default locale
+
+@enumerate
+@item
+Some Unicode (fixed width; maybe UTF-8, too?) may optionally
+be detected by the byte-order-mark magic (if the first two
+bytes are 0xFE 0xFF, the file is Unicode text, if 0xFF 0xFE,
+it is wrong-endian Unicode; the UTF-8 signature is the
+three-byte sequence 0xEF 0xBB 0xBF, which is byte-order
+independent).  This is probably an
+optimization that should not be on by default yet.
+
+@item
+ISO-2022 encodings will be detected as long as they use
+explicit designation of all non-ASCII character sets.  This
+means that many 7-bit ISO-2022 encodings would be detected
+(eg, ISO-2022-JP), but EUC-JP and X Compound Text would not,
+because they implicitly designate character sets.
+
+N.B. Latin-1 will be detected as binary, as for any Latin-*.
+
+N.B. An explicit ISO-2022 designation is semantically
+equivalent to a Content-Type: header.  It is more dangerous
+because shorter, but I think we should recognize them by
+default despite the slight risk; XEmacs is a text editor.
+
+N.B. This is unlikely to be as dangerous as it looks at first
+glance.  Any file that includes an 8-bit-set byte before the
+first valid designation should be detected as binary.
+
+@item
+Binary files will be detected (eg, presence of NULs, other
+non-whitespace control characters, absurdly long lines, and
+presence of bytes >127).
+
+@item
+Everything else is ASCII.
+
+@item
+Newlines will be detected in text files.
+@end enumerate
+
+@item
+European locales
+
+@enumerate
+@item
+Unicode may optionally be detected by the byte-order-mark
+magic.
+
+@item
+ISO-2022 encodings will be detected as long as they use
+explicit designation of all non-ASCII character sets.
+
+@item
+A locale-specific class of 1-byte character sets (eg,
+'(Latin-1)) will be detected.
+
+N.B.  The reason for permitting a class is for cases like
+Cyrillic where there are both ISO-8859 encodings and
+incompatible encodings (KOI-8r) in common use.  If you want to
+write a Latin-1 v. Latin-2 detector, be my guest, but I don't
+think it would be easy or accurate.
+
+@item
+Binary files will be detected per (1)(c), except that only
+8-bit bytes out of the encoding's range imply binary.
+
+@item
+Everything else is ASCII.
+
+@item
+Newlines will be detected in text files.
+@end enumerate
+
+@item
+CJK locales
+
+@enumerate
+@item
+Unicode may optionally be detected by the byte-order-mark
+magic.
+
+@item
+ISO-2022 encodings will be detected as long as they use
+explicit designation of all non-ASCII character sets.
+
+@item
+A locale-specific class of multi-byte and wide-character
+encodings will be detected.
+N.B. No 1-byte character sets (eg, Latin-1) will be detected.
+The reason for a class is to allow the Japanese to let Mule do
+the work of choosing EUC v. SJIS.
+
+@item
+Binary files will be detected per (2)(d).
+
+@item
+Everything else is ASCII.
+
+@item
+Newlines will be detected in text files.
+@end enumerate
+
+@item
+Unicode and general locales; multilingual use
+
+@enumerate
+@item
+Hopefully a system general enough to handle (1)--(3) will
+handle these, too, but we should watch out for gotchas like
+Unicode "plane 14" tags which (I think _both_ Ben and Olivier
+will agree) have no place in the internal representation, and
+thus must be treated as out-of-band control sequences.  I
+don't know if all such gotchas will be as easy to dispose of.
+
+@item
+An explicit coding system priority list will be provided to
+allow multilingual users to autodetect both Shift JIS and Big
+5, say, but this ability is not promised by Mule, since it
+would involve (eg) heuristics like picking a set of code
+points that are frequent in Shift JIS and uncommon in Big 5
+and betting that a file containing many characters from that
+set is Shift JIS.
+@end enumerate
+@end enumerate
+
+@subheading Better Algorithm, More Flexibility, Different Levels of Certainty
+
+@subheading Much More Flexible Coding System Priority List, per-Language Environment
+
+@subheading User Ability to Select Encoding when System Unsure or Encounters Errors
+
+@subheading Another Autodetection Proposal
+
+However, in general the detection code has major problems and needs lots
+of work:
+
+@itemize @bullet
+@item
+instead of merely "yes" or "no" for particular categories, we need a
+more flexible system, with various levels of likelihood.  Currently
+I've created a system with six levels, as follows:
+
+[see file-coding.h]
+
+Let's consider what this might mean for an ASCII text detector.  (In
+order to have accurate detection, especially given the iteration I
+proposed below, we need active detectors for @strong{all} types of data we
+might reasonably encounter, such as ASCII text files, binary files,
+and possibly other sorts of ASCII files, and not assume that simply
+"falling back to no detection" will work at all well.)
+
+An ASCII text detector DOES NOT report ASCII text as level 0, since
+that's what the detector is looking for.  Such a detector ideally
+wants all bytes in the range 0x20 - 0x7E (no high bytes!), except for
+whitespace control chars and perhaps a few others; LF, CR, or CRLF
+sequences at regular intervals (where "regular" might mean an average
+< 100 chars and 99% < 300 for code and other stuff of the "text file
+w/line breaks" variety, but for the "text file w/o line breaks"
+variety, excluding blank lines, averages could easily be 600 or more
+with 2000-3000 char "lines" not so uncommon); similar statistical
+variance between odds and evens (not Unicode); frequent occurrences of
+the space character; letters more common than non-letters; etc.  Also
+checking for too little variability between frequencies of characters
+and for exclusion of particular characters based on character ranges
+can catch ASCII encodings like base-64, UUEncode, UTF-7, etc.
+Granted, this doesn't even apply to everything called "ASCII", and we
+could potentially split off ASCII for code, ASCII for text,
+etc. as separate categories.  However, it does give us a lot to work
+off of, in deciding what likelihood to choose -- and it shows there's
+in fact a lot of detectable patterns to look for even in something
+seemingly so generic as ASCII.  The detector would report most text
+files in level 1 or level 2.  EUC encodings, Shift-JIS, etc.  probably
+go to level -1 because they also pass the EOL test and all other tests
+for the ASCII part of the text, but have lots of high bytes, which in
+essence turn them into binary.  Aberrant text files like something in
+BASE64 encoding might get placed in level 0, because they pass most
+tests but fail dramatically the frequency test; but they should not be
+reported as any lower, because that would cause explicit prompting,
+and the user should be able to open any valid text file without prompting.
+The escape sequences and the base-64-type checks might send 7-bit
+iso2022 to 0, but probably not -1, for similar reasons.
+
+@item
+The assumed algorithm for the above detection levels is to in essence
+sort categories first by detection level and then by priority.
+Perhaps, however, we would want smarter algorithms, or at least
+something user-controllable -- in particular, when (other than no
+category at level 0 or greater) do we prompt the user to pick a
+category?
+
+@item
+Improvements in how the detection algorithm works: we want to handle
+lots of different ways something could be encoded, including multiple
+stacked encodings.  trying to specify a series of detection levels
+(check for base64 first, then check for gzip, then check for an i18n
+decoding, then for crlf) won't generally work.  for example, what
+about the same encoding appearing more than once? for example, take
+euc-jp, base64'd, then gzip'd, then base64'd again: this could well
+happen, and you could specify the encodings specifically as
+base64|gzip|base64|euc-jp, but we'd like to autodetect it without
+worrying about exactly what order these things appear in.  We should
+allow for iterating over detection/decoding cycles until we reach
+some maximum (we got stuck in a loop, due to incorrect category
+tables or detection algorithms), have no reported detection levels
+over -1, or we end up with no change after a decoding pass (i.e. the
+coding system associated with a chosen category was @code{no-conversion}
+or something equivalent); see the sketch after this list.  It might
+make sense to divide things into
+two phases (internal and external), where the internal phase has a
+separate category list and would probably mostly end up handling EOL
+detection; but the more I think about it, the more I disagree.  With
+properly written detectors, and properly organized tables (in
+general, those decodings that are more "distinctive" and thus
+detectable with greater certainty go lower on the list), we shouldn't
+need two phases.  for example, let's say the example above was also
+in CRLF format.  The EOL detector (which really detects *plain text*
+with a particular EOL type) would return at most level 0 for all
+results until the text file is reached, whereas the base64, gzip or
+euc-jp decoders will return higher.  Once the text file is reached,
+the EOL detector will return 0 or higher for the CRLF encoding, and
+all other detectors will return 0 or lower; thus, we will successfully
+proceed through CRLF decoding, or at worst prompt the user. (The only
+external-vs-internal distinction that might make sense here is to
+favor coding systems of the correct source type over those that
+require conversion between external and internal; if done right, this
+could allow the CRLF detector to return level 1 for all CRLF-encoded
+text files, even those that look like Base-64 or similar encoding, so
+that CRLF encoding will always get decoded without prompting, but not
+interfere with other decoders.  On the other hand, this
+external-vs-internal distinction may not matter at all -- with
+automatic internal-external conversion, CRLF decoding can occur
+before or after decoding of euc-jp, base64, iso2022, or similar,
+without any difference in the final results.)
+
+#### What are we trying to say?  In base64, the CRLF decoding before
+base64 decoding is irrelevant; the CR and LF characters will be thrown
+out, since whitespace is not significant in base64.
+
+[sjt considers all of this to be rather bogus.  Ideas like "greater
+certainty" and "distinctive" can and should be quantified.  The issue
+of proper table organization should be a question of optimization.]
+
+[sjt wonders if it might not be a good idea to use Unicode's newline
+character as the internal representation so that (for non-Unicode
+coding systems) we can catch EOL bugs on Unix too.]
+
+@item
+There need to be two priority lists and two
+category->coding-system lists.  One is general, the other
+langenv-specific.  The user sets the former, the langenv
+the latter.  The langenv-specific entries take precedence
+over the others.  This works similarly to the
+Unicode charset priority list.
+
+@item
+The simple list of coding categories per detectors is not enough.
+Instead of coding categories, we need parameters.  For example,
+Unicode might have separate detectors for UTF-8, UTF-7, UTF-16,
+and perhaps UCS-4; or UTF-16/UCS-4 would be one detection type.
+UTF-16 would have parameters such as "little-endian" and "needs BOM",
+and possibly another one like "collapse/expand/leave alone composite
+sequences" once we add this support.  Usually these parameters
+correspond directly to a coding system parameter.  Different
+likelihood values can be specified for each parameter as well as for
+the detection type as a whole.  The user can specify particular
+coding systems for a particular combination of detection type and
+parameters, or can give "default parameters" associated with a
+detection type.  In the latter case, we create a new coding system as
+necessary that corresponds to the detected type and parameters.
+
+@item
+a better means of presentation.  rather than just coming up
+with the new file decoded according to the detected coding
+system, allow the user to browse through the file and
+conveniently reject it if it looks wrong; then detection
+starts again, but with that possibility removed.  in cases where
+certainty is low and thus more than one possibility is presented,
+the user can browse each one and select one or reject them all.
+
+@item
+fail-safe: even after the user has made a choice, if they
+later on realize they have the wrong coding system, they can
+go back, and we've squirreled away the original data so they
+can start the process over.  this may be tricky.
+
+@item
+using a larger buffer for detection.  we use just a small
+piece, which can give quite random results.  we may need to
+buffer up all the data we look through because we can't
+necessarily rewind.  the idea is we proceed until we get a
+result that's at least at a certain level of certainty
+(e.g. "probable") or we reached a maximum limit of how much
+we want to buffer.
+
+@item
+dealing with interactive systems.  we might need to go ahead
+and present the data before we've finished detection, and
+then re-decode it, perhaps multiple times, as we get better
+detection results.
+
+@item
+Clearly some of these are more important than others.  At the
+very least, the "better means of presentation" should be
+implemented as soon as possible, along with a very simple means
+of fail-safe whenever the data is readily available, e.g. it's
+coming from a file, which is the most common scenario.
+@end itemize
+
+ben [at least that's what sjt thinks]
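+
+The iterate-until-stable idea above might look roughly like the
+following (every function called here is an assumption; nothing like
+this exists yet):
+
+@example
+;; Hypothetical sketch of iterating detection/decoding over stacked
+;; encodings such as base64|gzip|base64|euc-jp.
+(defun detect-and-decode (data &optional max-passes)
+  "Repeatedly detect and decode DATA until no detector is confident."
+  (let ((passes 0)
+        (done nil))
+    (while (and (not done) (< passes (or max-passes 10)))
+      (let* ((results (run-all-detectors data))           ; assumed
+             (best (best-detection-result results)))      ; assumed
+        (if (or (null best)
+                (< (detection-likelihood best) 0))        ; assumed
+            ;; Nothing plausible left to strip off; stop (or prompt).
+            (setq done t)
+          (let ((decoded (decode-with-result data best))) ; assumed
+            (if (equal decoded data)
+                ;; Equivalent to no-conversion: we reached a fixpoint.
+                (setq done t)
+              (setq data decoded
+                    passes (1+ passes)))))))
+    data))
+@end example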
+
+*****
+
+While this is clearly something of an improvement over earlier designs,
+it doesn't deal with the most important issue: to do better than categories
+(which in the medium term is mostly going to mean "which flavor of Unicode
+is this?"), we need to look at statistical behavior rather than ruling out
+categories via presence of specific sequences.  This means the stream
+processor should
+
+@enumerate
+@item
+keep octet distributions (octet, 2-, 3-, 4- octet sequences); see the
+sketch below
+@item
+in some kind of compressed form
+@item
+look for "skip features" (eg, characteristic behavior of leading
+bytes for UTF-7, UTF-8, UTF-16, Mule code)
+@item
+pick up certain "simple" regexps
+@item
+provide "triggers" to determine when statistical detectors should be
+invoked, such as octet count
+@item
+and "magic" like Unicode signatures or file(1) magic.
+@end enumerate
+
+--sjt
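+
+Purely to make the first point concrete (single-octet counts only; the
+function name is invented, and a real implementation would live in C
+for speed):
+
+@example
+;; Hypothetical: accumulate single-octet frequencies from a string of
+;; raw bytes into a 256-element vector of counts.
+(defun note-octet-distribution (counts string)
+  "Add the octet frequencies of STRING (raw bytes) into vector COUNTS."
+  (let ((i 0)
+        (len (length string)))
+    (while (< i len)
+      (let ((octet (char-int (aref string i))))
+        (aset counts octet (1+ (aref counts octet))))
+      (setq i (1+ i)))
+    counts))
+@end example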
+
+@node Future Work -- Conversion Error Detection, Future Work -- BIDI Support, Future Work -- Autodetection, Future Work -- Byte Code Snippets
+@subsection Future Work -- Conversion Error Detection
+@cindex future work, conversion error detection
+@cindex conversion error detection, future work
+
+@subheading "No Corruption" Scheme for Preserving External Encoding when Non-Invertible Transformation Applied
+
+A preliminary and simple implementation is:
+
+@quotation
+But you could implement it much more simply and usefully by just
+determining, for any text being decoded into mule-internal, can we go
+back and read the source again?  If not, remember the entire file
+(GNUS message, etc) in text properties.  Then, implement the UI
+interface (like Netscape's) on top of that.  This way, you have
+something that at least works, but it might be inefficient.  All we
+would need to do is work on making the underlying implementation more
+efficient.
+@end quotation
+
+A more detailed proposal for avoiding binary file corruption is
+
+@quotation
+Basic idea: A coding system is a filter converting an entire input
+stream into an output stream. The resulting stream can be said to be
+"correspondent to" the input stream. Similarly, smaller units can
+correspond. These could potentially include zero width intervals on
+either side, but we avoid this.  Specifically, the coding system works
+like:
+
+@example
+loop (input) @{
+
+ Read bytes till we have enough to generate a translated character or characters.
+
+ This establishes a "correspondence" between the whole input and
+ output more or less in minimal chunks.
+
+@}
+@end example
+
+We then do the following processing:
+
+@enumerate
+@item
+Eliminate correspondences where one or the other of the I/O streams
+has a zero interval by combining with an adjacent interval;
+
+@item
+Group together all adjacent "identity" correspondences into as
+large groups as possible;
+
+@item
+Use text properties to store the non-identity correspondences on
+the characters.  For identity correspondences, use a simple text
+property over the whole stretch that contains no data but just
+indicates that the whole string of text is identity corresponded;
+see the sketch below.  (How do we define "identity"?  Latin-1, or
+could it be something else, e.g. Latin-2?)
+
+@item
+Figure out the procedures when text is inserted/deleted and copied
+or pasted.
+
+@item
+Figure out how to save the file out making use of the
+correspondences.  Allow ways of saving without correspondences, and
+doing a "save to buffer with and without correspondences."  Need to
+be clever when dealing with modal coding systems to parse the
+correspondences to get the internal state right.
+@end enumerate
+@end quotation
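+
+A tiny illustration of step 3 (the property names are invented; this is
+not working code anywhere in XEmacs):
+
+@example
+;; Hypothetical: record correspondences as text properties on the
+;; decoded text.
+(defun record-correspondence (start end original-bytes)
+  "Mark the text from START to END as decoded from ORIGINAL-BYTES."
+  (put-text-property start end 'original-bytes original-bytes))
+
+(defun record-identity-correspondence (start end)
+  "Mark START to END as an identity correspondence, carrying no data."
+  (put-text-property start end 'identity-correspondence t))
+@end example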
+
+@subheading Another Error-Catching Idea
+
+Nov 4, 1999
+
+Finally, I don't think "save the input" is as hard as you make it out to
+be.  Conceptually, in fact, it's simple: for each minimal group of bytes
+where you cannot absolutely guarantee that an external->internal
+transformation is reversible, you put a text property on the
+corresponding internal character indicating the bytes that generated
+this character.  We also put a text property on every character,
+indicating the coding system that caused the transformation.  This
+latter text property is extremely efficient (e.g. in a buffer with no
+data pasted from elsewhere, it will map to a single extent over all the
+buffer), and the former cases should not be prevalent enough to cause a
+lot of inefficiency, esp. if we define what "reversible" means for each
+coding system in such a way that it correctly handles the most common
+cases.  The hardest part, in fact, is making all the string/text
+handling in XEmacs be robust w.r.t. text properties.
+
+@subheading Strategies for Error Annotation and Coding Orthogonalization
+
+From sjt (?):
+
+We really want to separate out a number of things.  Conceptually,
+there is a nested syntax.
+
+At the top level is the ISO 2022 extension syntax, including charset
+designation and invocation, and certain auxiliary controls such as the
+ISO 6429 direction specification.  These are octet-oriented, with the
+single exception (AFAIK) of the "exit Unicode" sequence which uses the
+UTF's natural width (1 byte for UTF-7 and UTF-8, 2 bytes for UCS-2 and
+UTF-16, and 4 bytes for UCS-4 and UTF-32).  This will be treated as a
+(deprecated) special case in Unicode processing.
+
+The middle layer is ISO 2022 character interpretation.  This will depend
+on the current state of the ISO 2022 registers, and assembles octets
+into the character's internal representation.
+
+The lowest level is translating system control conventions.  At present
+this is restricted to newline translation, but one could imagine doing
+tab conversion or line wrapping here.  "Escape from Unicode" processing
+would be done at this level.
+
+At each level the parser will verify the syntax.  In the case of a
+syntax error or warning (such as a redundant escape sequence that affects
+no characters), the parser will take some action, typically inserting the
+erroneous octets directly into the output and creating an annotation
+which can be used by higher level I/O to mark the affected region.
+
+This should make it possible to do something sensible about separating
+newline convention processing from character construction, and about
+preventing ISO 2022 escape sequences from being recognized
+inappropriately.
+
+The basic strategy will be to have octet classification tables, and
+switch processing according to the table entry.
+
+It's possible that, by doing the processing with tables of functions or
+the like, the parser can be used for both detection and translation.
+
+@subheading Handling Writing a File Safely, Without Data Loss
+
+From ben:
+
+@quotation
+When writing a file, we need error detection; otherwise somebody
+will create a Unicode file without realizing the coding system
+of the buffer is Raw, and then lose all the non-ASCII/Latin-1
+text when it's written out.  We need two levels
+
+@enumerate
+@item
+first, a "safe-charset" level that checks before any actual
+encoding to see if all characters in the document can safely
+be represented using the given coding system.  FSF has a
+"safe-charset" property of coding systems, but it's stupid
+because this information can be automatically derived from
+the coding system, at least the vast majority of the time.
+What we need is some sort of
+alternative-coding-system-precedence-list, langenv-specific,
+where everything on it can be checked for safe charsets and
+then the user given a list of possibilities.  When the user
+does "save with specified encoding", they should see the same
+precedence list.  Again like with other precedence lists,
+there's also a global one, and presumably all coding systems
+not on other list get appended to the end (and perhaps not
+checked at all when doing safe-checking?).  Safe-checking
+should work something like this: compile a list of all
+charsets used in the buffer, along with a count of chars
+used (see the sketch below).  That way, "slightly unsafe" coding
+systems can perhaps be presented at the end, which will lose only a
+few characters and are perhaps what the users were looking for.
+
+[sjt sez this whole step is a crock.  If a universal coding system
+is unacceptable, the user had better know what he/she is doing,
+and explicitly specify a lossy encoding.
+In principle, we can simply check for characters being writable as
+we go along.  Eg, via an "unrepresentable character handler."  We
+still have the buffer contents.  If we can't successfully save,
+then ask the user what to do.  (Do we ever simply destroy previous
+file version before completing a write?)]
+
+@item
+when actually writing out, we need error checking in case an
+individual char in a charset can't be written even though the
+charsets are safe.  again, the user gets the choice of other
+reasonable coding systems.
+
+[sjt -- something is very confused, here; safe charsets should be
+defined as those charsets all of whose characters can be encoded.]
+
+@item
+same thing (error checking, list of alternatives, etc.) needs
+to happen when reading!  all of this will be a lot of work!
+@end enumerate
+@end quotation
+
+--ben
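+
+Purely as an illustration of the charset census in step 1 above (the
+function is hypothetical, though @code{char-charset} is a real Mule
+primitive):
+
+@example
+;; Hypothetical helper: count how many characters of each charset
+;; appear between START and END in the current buffer.
+(defun charset-usage-in-region (start end)
+  "Return an alist of (CHARSET . COUNT) for the region START..END."
+  (let ((alist nil))
+    (save-excursion
+      (goto-char start)
+      (while (< (point) end)
+        (let* ((cs (char-charset (char-after (point))))
+               (cell (assq cs alist)))
+          (if cell
+              (setcdr cell (1+ (cdr cell)))
+            (setq alist (cons (cons cs 1) alist))))
+        (forward-char 1)))
+    alist))
+@end example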
+
+I don't much like Ben's scheme.  First, this isn't an issue of I/O,
+it's a coding issue.  It can happen in many places, not just on stream
+I/O.  Error checking should take place on all translations.  Second,
+the two-pass algorithm should be avoided if possible.  In some cases
+(eg, output to a tty) we won't be able to go back and change the
+previously output data.  Third, the whole idea of having a buffer full
+of arbitrary characters which we're going to somehow shoehorn into a
+file based on some twit user's less than informed idea of a coding system
+is kind of laughable from the start.  If we're going to say that a buffer
+has a coding system, shouldn't we enforce restrictions on what you can
+put into it?  Fourth, what's the point of having safe charsets if some
+of the characters in them are unsafe?  Fifth, what makes you think we're
+going to have a list of charsets?  It seems to me that there might be
+reasons to have user-defined charsets (eg, "German" vs "French" subsets
+of ISO 8859/15).  Sixth, the idea of having language environment determine
+precedence doesn't seem very useful to me.  Users who are working with a
+language that corresponds to the language environment are not going to
+run into safe charsets problems.  It's users who are outside of their
+usual language environment who run into trouble.  Also, the reason for
+specifying anything other than a universal coding system is normally
+restrictions imposed by other users or applications.  Seventh, the
+statistical feedback isn't terribly useful.  Users rarely "want" a
+coding system, they want their file saved in a useful way.  We could
+add a FORCE argument to conversions for those who really want a specific
+coding system.  But mostly, a user might want to edit out a few unsafe
+characters.  So (up to some maximum) we should keep a list of unsafe
+text positions, and provide a convenient function for traversing them.
+
+--sjt
+
+@node Future Work -- BIDI Support, Future Work -- Localized Text/Messages, Future Work -- Conversion Error Detection, Future Work -- Byte Code Snippets
+@subsection Future Work -- BIDI Support
+@cindex future work, bidi support
+@cindex bidi support, future work
+
+@enumerate
+@item
+Use text properties to handle nesting levels and overrides;
+BIDI-specific text properties (as per the Unicode BIDI algorithm) are
+computed at text insertion time.
+
+@item
+Lisp API for reordering a display line at redisplay time,
+possibly substitution of different glyphs (esp. mirroring of
+glyphs).
+
+@item
+Lisp API called after a display line is laid out, but only when
+reordering may be necessary (display engine checks for
+non-uniform BIDI text properties; can handle internally a line
+that's completely in one direction)
+
+@item
+Default direction is a buffer-local variable (see the sketch after
+this list)
+
+@item
+We concentrate on implementing Unicode BIDI algorithm.
+
+@item
+Display support for mirroring of entire window
+
+@item
+Display code keeps track of mirroring junctures so it can
+display double cursor.
+
+@item
+Entire layout of screen (on a per window basis) is exported as a
+Lisp API, for visual editing (also very useful for other
+purposes e.g. proper handling of word wrapping with proportional
+fonts, complex Lisp layout engines e.g. W3)
+
+@item
+Logical, visual, etc. cursor movement handled entirely in Lisp,
+using aforementioned API, plus a specifier for controlling how
+cursor is shown (e.g. split or not).
+@end enumerate
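+
+A very rough sketch of what points 2--4 above might look like at the
+Lisp level (every name here is hypothetical):
+
+@example
+;; Hypothetical buffer-local default direction (point 4).
+(defvar default-bidi-direction 'left-to-right
+  "Base direction for paragraphs with no strong directional character.")
+(make-variable-buffer-local 'default-bidi-direction)
+
+;; Hypothetical redisplay hook (points 2 and 3): called with the glyph
+;; runs of one display line, only when the line's BIDI text properties
+;; are non-uniform; it must return the runs in visual order, possibly
+;; substituting mirrored glyphs.
+(defvar bidi-reorder-display-line-function nil
+  "Function used by redisplay to reorder a display line for BIDI text.")
+@end example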
+
+@node Future Work -- Localized Text/Messages,  , Future Work -- BIDI Support, Future Work -- Byte Code Snippets
+@subsection Future Work -- Localized Text/Messages
+@cindex future work, localized text/messages
+@cindex localized text/messages, future work
+
+NOTE: There is existing message translation in X Windows of menu names.
+This is handled through X resources.  The files are in
+@file{PACKAGES/mule-packages/locale/app-defaults/@var{LOCALE}/Emacs}, where
+@var{LOCALE} is @samp{ja}, @samp{fr}, etc.
+
+See @file{lib-src/make-msgfile.lex}.
+
+Long comment from jwz, some additions from ben marked "ben":
+
+(much of this comment is outdated, and a lot of it is actually
+implemented)
+
+@subheading Proposal for How This All Ought to Work
+
+this isn't implemented yet, but this is the plan-in-progress
+
+In general, it's accepted that the best way to internationalize is for all
+messages to be referred to by a symbolic name (or number) and come out of a
+table or tables, which are easy to change.
+
+However, with Emacs, we've got the task of internationalizing a huge body
+of existing code, which already contains messages internally.
+
+For the C code we've got two options:
+
+@itemize @bullet
+@item
+Use a Sun-like @code{gettext()} form, which takes an "english" string which
+appears literally in the source, and uses that as a hash key to find
+a translated string;
+@item
+Rip all of the strings out and put them in a table.
+@end itemize
+
+In this case, it's desirable to make as few changes as possible to the C
+code, to make it easier to merge the code with the FSF version of emacs
+which won't ever have these changes made to it.  So we should go with the
+former option.
+
+The way it has been done (between 19.8 and 19.9) was to use @code{gettext()}, but
+@strong{also} to make massive changes to the source code.  The goal now is to use
+@code{gettext()} at run-time and yet not require a textual change to every line
+in the C code which contains a string constant.  A possible way to do this
+is described below.
+
+(@code{gettext()} can be implemented in terms of @code{catgets()} for non-Sun systems, so
+that in itself isn't a problem.)
+
+For the Lisp code, we've got basically the same options: put everything in
+a table, or translate things implicitly.
+
+Another kink that lisp code introduces is that there are thousands of
+third-party packages, so changing the source for all of those is simply not an
+option.
+
+Is it a goal that if some third party package displays a message which is
+one we know how to translate, then we translate it?  I think this is a
+worthy goal.  It remains to be seen how well it will work in practice.
+
+So, we should endeavor to minimize the impact on the lisp code.  Certain
+primitive lisp routines (the stuff in lisp/prim/, and especially in
+cmdloop.el and minibuf.el) may need to be changed to know about translation,
+but that's an ideologically clean thing to do because those are considered
+a part of the emacs substrate.
+
+However, if we find ourselves wanting to make changes to, say, RMAIL, then
+something has gone wrong.  (Except to do things like remove assumptions
+about the order of words within a sentence, or how pluralization works.)
+
+There are two parts to the task of displaying translated strings to the 
+user: the first is to extract the strings which need to be translated from
+the sources; and the second is to make some call which will translate those
+strings before they are presented to the user.
+
+The old way was to use the same form to do both, that is, @code{GETTEXT()} was both
+the tag that we searched for to build a catalog, and was the form which did
+the translation.  The new plan is to separate these two things more: the
+tags that we search for to build the catalog will be stuff that was in there
+already, and the translation will get done in some more centralized, lower
+level place.
+
+This program (make-msgfile.c) addresses the first part, extracting the 
+strings.
+
+For the emacs C code, we need to recognize the following patterns:
+
+@example
+  message ("string" ... )
+  error ("string")
+  report_file_error ("string" ... )
+  signal_simple_error ("string" ... )
+  signal_simple_error_2 ("string" ... )
+  
+  build_translated_string ("string")
+  #### add this and use it instead of @code{build_string()} in some places.
+  
+  yes_or_no_p ("string" ... )
+  #### add this instead of funcalling Qyes_or_no_p directly.
+
+  barf_or_query_if_file_exists	#### restructure this
+  check all callers of Fsignal	#### restructure these
+  signal_error (Qerror ... )		#### change all of these to @code{error()}
+  
+  And we also parse out the @code{interactive} prompts from @code{DEFUN()} forms.
+  
+  #### When we've got a string which is a candidate for translation, we
+  should ignore it if it contains only format directives, that is, if
+  there are no alphabetic characters in it that are not a part of a `%'
+  directive.  (Careful not to translate either "%s%s" or "%s: ".)
+@end example
+
+For the emacs Lisp code, we need to recognize the following patterns:
+
+@example
+  (message "string" ... )
+  (error "string" ... )
+  (format "string" ... )
+  (read-from-minibuffer "string" ... )
+  (read-shell-command "string" ... )
+  (y-or-n-p "string" ... )
+  (yes-or-no-p "string" ... )
+  (read-file-name "string" ... )
+  (temp-minibuffer-message "string")
+  (query-replace-read-args "string" ... )
+@end example
+  
+I expect there will be a lot like the above; basically, any function which
+is a commonly used wrapper around an eventual call to @code{message} or
+@code{read-from-minibuffer} needs to be recognized by this program.
+
+
+@example
+  (dgettext "domain-name" "string")		#### do we still need this?
+  
+  things that should probably be restructured:
+    @code{princ} in cmdloop.el
+    @code{insert} in debug.el
+    face-interactive
+    help.el, syntax.el all messed up
+@end example
+  
+ben: (format) is a tricky case.  If I use format to create a string
+that I then send to a file, I probably don't want the string translated.
+On the other hand, if the string gets used as an argument to (y-or-n-p)
+or some such function, I do want it translated, and it needs to be
+translated before the %s and such are replaced.  The proper solution
+here is for (format) and other functions that call gettext but don't
+immediately output the string to the user to add the translated (and
+formatted) string as a string property of the object, and have
+functions that output potentially translated strings look for a
+"translated string" property.  Of course, this will fail if someone
+does something like
+
+@example
+   (y-or-n-p (concat (if you-p "Do you " "Does he ")
+                     (format "want to delete %s? " filename)))
+@end example
+
+But you shouldn't be doing things like this anyway.
+
+ben: Also, to avoid excessive translating, strings should be marked
+as translated once they get translated, and further calls to gettext
+don't do any more translating.  Otherwise, a call like
+
+@example
+   (y-or-n-p (format "Delete %s? " filename))
+@end example
+
+would cause translation on both the pre-formatted and post-formatted
+strings, which could lead to weird results in some cases (y-or-n-p
+has to translate its argument because someone could pass a string to
+it directly).  Note that the "translating too much" solution outlined
+below could be implemented by just marking all strings that don't
+come from a .el or .elc file as already translated.
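+
+A minimal sketch of the "mark it as already translated" idea, assuming
+string text properties are used for the mark (the function name is
+invented):
+
+@example
+;; Hypothetical: translate a string at most once.
+(defun gettext-once (string)
+  "Return the translation of STRING unless it is already marked translated."
+  (if (or (zerop (length string))
+          (get-text-property 0 'translated string))
+      string
+    (let ((new (gettext string)))
+      (put-text-property 0 (length new) 'translated t new)
+      new)))
+@end example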
+
+Menu descriptors: one way to extract the strings in menu labels would be
+to teach this program about "^(defvar .*menu\n" forms; that's probably
+kind of hard, though, so perhaps a better approach would be to make this
+program recognize lines of the form
+
+@example
+  "string" ... ;###translate
+@end example
+
+where the magic token ";###translate" on a line means that the string 
+constant on this line should go into the message catalog.  This is analogous
+to the magic ";###autoload" comments, and to the magic comments used in the
+EPSF structuring conventions.
+
+-----
+So this program manages to build up a catalog of strings to be translated.
+To address the second part of the problem, of actually looking up the
+translations, there are hooks in a small number of low level places in
+emacs.
+
+Assume the existence of a C function gettext(str) which returns the 
+translation of @var{str} if there is one, otherwise returns @var{str}.
+
+@itemize @bullet
+@item
+@code{message()} takes a char* as its argument, and always filters it through
+@code{gettext()} before displaying it.
+
+@item
+errors are printed by running the lisp function @code{display-error} which
+doesn't call @code{message} directly (it princ's to streams), so it must be
+carefully coded to translate its arguments.  This is only a few lines
+of code.
+
+@item
+@code{Fread_minibuffer_internal()} is the lowest level interface to all minibuf
+interactions, so it is responsible for translating the value that will go
+into Vminibuf_prompt.
+
+@item
+@code{Fpopup_menu()} filters the menu titles through @code{gettext()}.
+
+The above take care of 99% of all messages the user ever sees.
+
+@item
+The Lisp function @code{temp-minibuffer-message} translates its arg.
+
+@item
+@code{query-replace-read-args} is funny; it does
+
+@example
+(setq from (read-from-minibuffer (format "%s: " string) ... ))
+(setq to (read-from-minibuffer (format "%s %s with: " string from) ... ))
+@end example
+@end itemize
+
+What should we do about this?  We could hack query-replace-read-args to
+translate its args, but might this be a more general problem?  I don't
+think we ought to translate all calls to format.  We could just change
+the calling sequence, since this is odd in that the first %s wants to be
+translated but the second doesn't.
+
+Solving the "translating too much" problem:
+
+The concern has been raised that in this situation:
+
+@itemize @bullet
+@item
+"Help" is a string for which we know a translation;
+@item
+someone visits a file called Help, and someone does something 
+contrived like (error buffer-file-name)
+@end itemize
+
+then we would display the translation of Help, which would not be correct.
+We can solve this by adding a bit to Lisp_String objects which identifies
+them as having been read as literal constants from a .el or .elc file (as
+opposed to having been constructed at run time, as in the above
+case).  Specifically:
+
+@example
+  - @code{Fmessage()} takes a lisp string as its first argument.
+    If that string is a constant, that is, was read from a source file
+    as a literal, then it calls @code{message()} with it, which translates.
+    Otherwise, it calls @code{message_no_translate()}, which does not translate.
+
+  - @code{Ferror()} (actually, @code{Fsignal()} when condition is Qerror) works similarly.
+@end example
+
+More specifically, we do:
+
+@quotation
+ Scan specified C and Lisp files, extracting the following messages:
+
+@example
+   C files:
+      GETTEXT (...)
+      DEFER_GETTEXT (...)
+      DEFUN interactive prompts
+   Lisp files:
+      (gettext ...)
+      (dgettext "domain-name" ...)
+      (defer-gettext ...)
+      (interactive ...)
+@end example
+
+The arguments given to this program are all the C and Lisp source files
+of GNU Emacs.  .el and .c files are allowed.  There is no support for .elc
+files at this time, but they may be specified; the corresponding .el file
+will be used.  Similarly, .o files can also be specified, and the corresponding
+.c file will be used.  This helps the makefile pass the correct list of files.
+
+The results, which go to standard output or to a file specified with -a or -o
+(-a to append, -o to start from nothing), are quoted strings wrapped in
+gettext(...).  The results can be passed to xgettext to produce a .po message
+file.
+
+However, we also need to do the following:
+
+@enumerate
+@item
+Definition of Arg below won't handle a generalized argument
+as might appear in a function call.  This is fine for DEFUN
+and friends, because only simple arguments appear there; but
+it might run into problems if Arg is used for other sorts
+of functions.
+@item
+@code{snarf()} should be modified so that it doesn't output null
+strings and non-textual strings (see the comment at the top
+of make-msgfile.c).
+@item
+parsing of (insert) should snarf all of the arguments.
+@item
+need to add set-keymap-prompt and deal with gettext of that.
+@item
+parsing of arguments should snarf all strings anywhere within
+the arguments, rather than just looking for a string as the
+argument.  This allows if statements as arguments to get parsed.
+@item
+@code{begin_paren_counting()} et al. should handle recursive entry.
+@item
+handle set-window-buffer and other such functions that take
+a buffer as the other-than-first argument.
+@item
+there is a fair amount of work to be done on the C code.
+Look through the code for #### comments associated with
+'#ifdef I18N3' or with an I18N3 nearby.
+@item
+Deal with @code{get-buffer-process} et al.
+@item
+Many of the changes in the Lisp code marked
+'rewritten for I18N3 snarfing' should be undone once (5) is
+implemented.
+@item
+Go through the Lisp code in prim and make sure that all
+strings are gettexted as necessary.  This may reveal more
+things to implement.
+@item
+Do the equivalent of (8) for the Lisp code.
+@item
+Deal with parsing of menu specifications.
+@end enumerate
+@end quotation
+
+@node Future Work -- Lisp Stream API, Future Work -- Multiple Values, Future Work -- Byte Code Snippets, Future Work
+@section Future Work -- Lisp Stream API
+@cindex future work, Lisp stream API
+@cindex Lisp stream API, future work
+
+Expose XEmacs internal lstreams to Lisp as stream objects.  (In
+addition to the functions given below, each stream object has
+properties that can be associated with it using the standard @code{put},
+@code{get}, etc. API.  For GNU Emacs, where @code{put} and @code{get}
+have not been extended to be general property functions, but work only
+on symbols, we would have to create functions set-stream-property,
+stream-property, remove-stream-property, and stream-properties.  These
+provide the same functionality as the generic @code{get}, @code{put},
+@code{remprop}, and @code{object-plist} functions under XEmacs.)
+
+(Implement properties using a hash table, and @strong{generalize} this so
+that it is extremely easy to add a property interface onto any kind
+of object.)
+
+@example  
+(write-stream STREAM STRING)
+@end example
+
+Write the STRING to the STREAM.  This will signal an error if all the
+bytes cannot be written.
+
+@example
+(read-stream STREAM &optional N SEQUENCE)
+@end example
+
+Reads data from STREAM.  N specifies the number of bytes or
+characters, depending on the stream.  SEQUENCE specifies where to
+write the data.  If N is not specified, data is read until end of
+file.  If SEQUENCE is not specified, the data is returned as a string.
+If SEQUENCE is specified, the SEQUENCE must be large enough to hold
+the data.
+
+@example
+(push-stream-marker STREAM)
+@end example
+
+   returns ID, probably a stream marker object
+
+@example
+(pop-stream-marker STREAM)
+@end example
+
+   backs up stream to last marker
+
+@example
+(unread-stream STREAM STRING)
+@end example
+
+The only valid STREAM is an input stream in which case the data in
+STRING is pushed back and will be read ahead of all other data.  In
+general, there is no limit to the amount of data that can be unread or
+the number of times that unread-stream can be called before another
+read.
+
+@example
+(stream-available-chars STREAM)
+@end example
+
+This returns the number of characters (or bytes) that can definitely
+be read from the stream without an error.  This can be useful, for
+example, when dealing with non-blocking streams, where an attempt to
+read too much data will result in a blocking error.
+
+@example
+(stream-seekable-p STREAM)
+@end example
+
+Returns true if the stream is seekable.  If false, operations such as
+seek-stream and stream-position will signal an error.  However, the
+functions set-stream-marker and seek-stream-marker will still succeed
+for an input stream.
+
+@example
+(stream-position STREAM)
+@end example
+
+If STREAM is a seekable stream, returns a position which can be passed
+to seek-stream.
+
+@example
+(seek-stream STREAM N)
+@end example
+
+If STREAM is a seekable stream, move to the position indicated by N,
+otherwise signal an error.
+
+@example
+(set-stream-marker STREAM)
+@end example
+
+If STREAM is an input stream, create a marker at the current position,
+which can later be moved back to.  The stream does not need to be a
+seekable stream.  In this case, all successive data will be buffered
+to simulate the effect of a seekable stream.  Therefore use this
+function with care.
+
+@example
+(seek-stream-marker STREAM marker)
+@end example
+
+Move the stream back to the position that was stored in the marker
+object.  (This is generally an opaque object of type stream-marker.)
+
+@example
+(delete-stream-marker MARKER)
+@end example
+
+Destroy the stream marker; if the stream is a non-seekable stream
+and there are no other stream markers pointing to an earlier position,
+this frees up some buffering information.
+
+@example
+(delete-stream STREAM N)
+@end example
+
+@example
+(delete-stream-marker STREAM ID)
+@end example
+
+@example
+(close-stream stream)
+@end example
+
+Writes any remaining data to the stream and closes it and the object
+to which it's attached.  This also happens automatically when the
+stream is garbage collected.
+
+@example
+(getchar-stream STREAM)
+@end example
+
+Return a single character from the stream. (This may be a single byte
+depending on the nature of the stream).  This is actually a macro with
+an extremely efficient implementation (as efficient as you can get in
+Emacs Lisp), so that this can be used without fear in a loop.  The
+implementation works by reading a large amount of data into a vector
+and then simply using the function AREF to read characters one by one
+from the vector.  Because AREF is one of the primitives handled
+specially by the byte interpreter, this will be very efficient.  The
+actual implementation may in fact use the function
+call-with-condition-handler to avoid the necessity of checking for
+overflow.  Its typical implementation is to fetch the vector
+containing the characters as a stream property, as well as the index
+into that vector.  Then it retrieves the character and increments the
+value and stores it back in the stream.  As a first implementation, we
+check to see when we are reading the character whether the character
+would be out of range.  If so, we read another 4096 characters,
+storing them into the same vector, setting the index back to the
+beginning, and then proceeding with the rest of the getchar algorithm.
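+
+A sketch of what such a macro might look like (hypothetical: it assumes
+the proposed @code{get}/@code{put} property API on stream objects and
+the @code{read-stream} call above, and buffers into a string rather
+than a vector for simplicity):
+
+@example
+(defmacro getchar-stream (stream)
+  "Return the next character from STREAM, refilling its buffer as needed."
+  `(let* ((s ,stream)
+          (buf (get s 'getchar-buffer))
+          (i (get s 'getchar-index)))
+     (when (or (null buf) (>= i (length buf)))
+       ;; out of buffered data: read up to 4096 more characters
+       (setq buf (read-stream s 4096))
+       (setq i 0)
+       (put s 'getchar-buffer buf))
+     (put s 'getchar-index (1+ i))
+     (aref buf i)))
+@end example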
+
+@example
+(putchar-stream STREAM CHAR)
+@end example
+
+This is similar to getchar-stream but it writes data instead of
+reading data.
+
+@example
+Function make-stream
+@end example
+
+There are actually two stream-creation functions, which are:
+
+@example
+(make-input-stream TYPE PROPERTIES)
+(make-output-stream TYPE PROPERTIES)
+@end example
+
+These can be used to create a stream that reads data, or writes data,
+respectively.  PROPERTIES is a property list and the allowable
+properties in it are defined by the type.  Possible types are:
+
+@enumerate
+@item
+@code{file} (this reads data from a file or writes to a file)
+
+Allowable properties are:
+
+@table @code
+@item :file-name
+(the name of the file)
+
+@item :create
+(for output streams only, creates the file if it doesn't
+already exist)
+
+@item :exclusive
+(for output streams only, fails if the file already
+exists)
+
+@item :append
+(for output streams only; starts appending to the end
+of the file rather than overwriting the file)
+
+@item :offset
+(position in bytes in the file where reading or writing
+should begin.  If unspecified, defaults to the beginning of the
+file, or to the end of the file when :append is specified)
+
+@item :count
+(for input streams only, the number of bytes to read from
+the file before signaling "end of file".  If nil or omitted, the
+number of bytes is unlimited)
+
+@item :non-blocking
+(if true, reads or writes will fail if the operation
+would block.  This only makes sense for non-regular files).
+@end table
+
+@item
+@code{process} (For output streams only, send data to a process.)
+
+Allowable properties are:
+
+@table @code
+@item :process
+(the process object)
+@end table
+
+@item
+@code{buffer}  (Read from or write to a buffer.)
+
+Allowable properties are:
+
+@table @code
+@item :buffer
+(the name of the buffer or the buffer object.)
+
+@item :start
+(the position to start reading from or writing to.  If nil,
+use the buffer point.  If true, use the buffer's point and move
+point beyond the end of the data read or written.)
+
+@item :end
+(only for input streams, the position to stop reading at.  If
+nil, continue to the end of the buffer.)
+
+@item :ignore-accessible
+(if true, the defaults for :start and :end
+ignore any narrowing of the buffer.)
+@end table
+
+@item
+@code{stream} (read from or write to a lisp stream)
+
+Allowable properties are:
+
+@table @code
+@item :stream
+(the stream object)
+
+@item :offset
+(the position to begin to be reading from or writing to)
+
+@item :length
+(for input streams only, the amount of data to read,
+defaulting to the rest of the data in the string.)
+
+@item :revise-string
+(for output streams only; if true, the stream is resized as
+necessary to accommodate data written off the end, otherwise the
+writes will fail.)
+@end table
+
+@item
+@code{memory} (For output only, writes data to an internal memory
+buffer.  This is more lightweight than using a Lisp buffer.  The
+function memory-stream-string can be used to convert the memory
+into a string.)
+
+@item
+@code{debugging} (For output streams only, write data to the debugging
+output.)
+
+@item
+@code{stream-device} (During non-interactive invocations only, read
+from or write to the initial stream terminal device.)
+
+@item
+@code{function} (For output streams only, send data by calling a
+function, exactly as with the STREAM argument to the print
+primitive.)
+
+Allowable Properties are:
+
+@table @code
+@item :function
+(the function to call.  The function is called with one
+argument, the stream.)
+@end table
+
+@item
+@code{marker} (Write data to the location pointed to by a marker and
+move the marker past the data.)
+
+Allowable properties are:
+
+@table @code
+@item :marker
+(the marker object.)
+@end table
+
+@item
+@code{decoding} (As an input stream, reads data from another stream and
+decodes it according to a coding system.  As an output stream,
+decodes the data written to it according to a coding system and
+then writes the results to another stream.)
+
+Properties are:
+
+@table @code
+@item :coding-system
+(the symbol or coding-system object which defines the
+decoding.)
+
+@item :stream
+(the stream on the other end.)
+@end table
+
+@item
+@code{encoding} (As an input stream, reads data from another stream and
+encodes it according to a coding system.  As an output stream,
+encodes the data written to it according to a coding system and
+then writes the results to another stream.)
+
+Properties are:
+
+@table @code
+@item :coding-system
+(the symbol or coding-system object which defines the
+encoding.)
+
+@item :stream
+(the stream on the other end.)
+@end table
+@end enumerate
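+
+As a concrete illustration of the proposed API (hypothetical: none of
+these functions exist yet, I assume PROPERTIES is passed as a single
+property list, and I assume @code{getchar-stream} returns @code{nil} at
+end of file, which the proposal leaves unspecified):
+
+@example
+;; Copy a file into a buffer, one character at a time.
+(let ((in (make-input-stream 'file '(:file-name "/etc/motd")))
+      (out (make-output-stream 'buffer '(:buffer "*stream-demo*")))
+      ch)
+  (unwind-protect
+      (while (setq ch (getchar-stream in))
+        (putchar-stream out ch))
+    (close-stream in)
+    (close-stream out)))
+@end example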
+
+Consider
+
+@example
+(define-stream-type 'type
+  :read-function
+  :write-function
+  :rewind-
+  :seek-
+  :tell-
+  (?:buffer)
+@end example
+
+Old Notes:
+
+Expose lstreams as hash (put get etc. properties) table.
+
+@example  
+  (write-stream stream string)
+  (read-stream stream &optional n sequence)
+  (make-stream ...)
+  (push-stream-marker stream)
+     returns ID prob a stream marker object
+  (pop-stream-marker stream)
+     backs up stream to last marker
+  (unread-stream stream string)
+  (stream-available-chars stream)
+  (seek-stream stream n)
+  (delete-stream stream n)
+  (delete-stream-marker stream ic) can always be poe only nested if you
+    have set stream marker
+  
+  (get-char-stream @strong{generalizes} stream)
+  
+  a macro that tries to be efficient perhaps by reading the next
+  e.g. 512 characters into a vector and arefing them.  Might check aref
+  optimization for vectors in the byte interpreter.
+  
+  (make-stream 'process :process ... :type write)
+  
+  Consider
+  
+  (define-stream-type 'type
+    :read-function
+    :write-function
+    :rewind-
+    :seek-
+    :tell-
+    (?:buffer)
+@end example
+  
+@node Future Work -- Multiple Values, Future Work -- Macros, Future Work -- Lisp Stream API, Future Work
+@section Future Work -- Multiple Values
+@cindex future work, multiple values
+@cindex multiple values, future work
+
+At the lowest level, all functions that can return multiple values are
+defined with @code{DEFUN_MULTIPLE_VALUES} and have an extra parameter,
+a @code{struct mv_context *}.
+
+It has to be this way to ensure that only the function itself, and not
+any functions it calls, thinks it is being called in a multiple-value
+context.
+
+apply, funcall, eval might propagate their mv context to their
+children?
+
+Might need eval-mv to implement calling a fun in an mv context.  Maybe
+also funcall_mv? apply_mv?
+
+Generally, just set up context appropriately.  Call fun (noticing
+whether it's an mv-aware fun) and binding values on the way back or
+passing them out.  (e.g. to multiple-value-bind)
+
+@subheading Common Lisp multiple values, required for specifier improvements.
+
+The multiple return values from get-specifier should allow the
+specifier value to be modified in the correct fashion (i.e.  should
+interact correctly with all manner of changes from other callers)
+using set-specifier.  We should check this and see if we need other
+return values.  (how-to-add? inst-list?)
+
+In C, call multiple-values-context to get number of expected values,
+and multiple-value-set (#, value) to set values other than the first.
+
+(Returns Qno_value, or something, if there are no values.)
+
+#### Or should throw?  Probably not.
+#### What happens if a fn returns no values but the caller expects a
+#### value?
+
+Something like @code{funcall_with_multiple_values()} for setting up the
+context.
+
+For efficiency, the byte-code loader could notice @code{Ffuncall}s of
+multiple-value functions and substitute special opcodes during
+load-time processing, if it mattered.
+  
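+At the Lisp level, the intended usage would presumably follow Common
+Lisp (sketch only: @code{specifier-instance} does not currently return
+multiple values, the extra values shown are the ones proposed in the
+Specifiers section below, and @code{multiple-value-bind} is currently
+just the list-based emulation from the CL emulation package):
+
+@example
+(multiple-value-bind (value instantiator locale tag-set)
+    (specifier-instance (face-font 'default))
+  ;; tweak INSTANTIATOR and put it back exactly where it was found
+  (add-spec-to-specifier (face-font 'default) instantiator locale tag-set))
+@end example
+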
+@node Future Work -- Macros, Future Work -- Specifiers, Future Work -- Multiple Values, Future Work
+@section Future Work -- Macros
+@cindex future work, macros
+@cindex macros, future work
+
+@enumerate
+@item
+Option to control whether beep really kills a macro execution.
+@item
+Recently defined macros are remembered on a stack, so accidentally
+defining another one doesn't clobber the previous one.  You can
+"rotate" anonymous macros or just pick one (numbered) to put on tags,
+so it works with execute-macro - the menu shows the anonymous macro
+and lists some keystrokes.  Normally they are numbered, but you can
+easily assign one to a named function or to a keyboard sequence, or
+give it a number (or give it a letter accelerator?)
+@end enumerate
+
+@node Future Work -- Specifiers, Future Work -- Display Tables, Future Work -- Macros, Future Work
+@section Future Work -- Specifiers
+@cindex future work, specifiers
+@cindex specifiers, future work
+
+@subheading Ideas To Work On When Their Time Has Come
+
+@itemize
+@item
+specifier-instance returns additional params (multiple values) - the
+instantiator used, the associated tag set, the locale it was found in,
+and a code that can be passed in as an additional param RESTART to
+restart the instantiation process, e.g. to allow an instantiator to
+"inherit" from another one higher up.  Also, the domain can be 'global
+(look only in global specs) or "complex" - a list of the actual locales
+to look in (e.g. a buffer, a frame, a device, 'global)
+
+@item
+pragmatic-specifier-domain (locale)
+Converts a locale into a domain in a way that's "pragmatic" - does
+what most users expect will happen, but is not clean.  In
+particular, handling of "buffer" requires trickiness, as mentioned
+before.
+
+@item
+ensure-instantiator-exists (specifier locale)
+Ensures an actual instantiator exists in a locale, so that it can
+later be futzed with.  If none exists, one is constructed by first
+calling pragmatic-specifier domain and then specifier-instance and
+fetching out the instantiator for this call.
+
+@item
+map-modifying-instantiators (specifier fun &optional locale tag-set)
+Same args as map-specifier, but use the return value from the fun to
+replace the instantiator.  Called with three args (instantiator
+locale tag-set).  (An example appears just after this list.)
+
+@item
+map-modifying-instantiators-force (specifier fun &optional locale tag-set)
+Same as previous, but calls ensure-instantiator-exists on each
+locale before processing.
+@end itemize
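+
+For example (hypothetical, since @code{map-modifying-instantiators}
+does not exist yet), a call like the following could replace one color
+instantiator with another throughout a face's background specifier,
+preserving locales and tag sets:
+
+@example
+(map-modifying-instantiators
+ (face-background 'default)
+ (lambda (instantiator locale tag-set)
+   (if (equal instantiator "red")
+       "firebrick"
+     instantiator)))
+@end example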
+
+NOTE:  Can do preliminary implementation without Multiple Values -
+instead create fun specifier-instance - that returns a list (and will
+be deleted at some point)
+
+@subheading specifier &c changes for glyphs
+
+@enumerate
+@item
+@itemize @bullet
+@item
+resizable vectors with funs to insert, delete elements (elements
+shift accordingly)
+@item
+gap array vectors as an implementation of resizing vectors.
+@end itemize
+
+@item
+You can @code{put} @code{get}, etc. on vectors to modify properties within
+them.
+
+@item
+copy-over routines
+routines that carefully copy one complex item OVER another one,
+destroying the second in the process.  I wrote one for lists.  Need
+a general copy-over-tree.
+
+@item
+improvement to specifier mapping routines e.g.
+
+map-modifying-instantiator and its force versions below, so that we
+could implement in turns.
+
+@item
+put-specifier-property (specifier key value &opt locale tag-set),
+which finds the instantiator in the locale, possibly creating one
+if necessary, and goes into the vector, changes it, and puts it
+back into the specifier.
+
+@item
+Smarter add-spec-to-specifier
+
+If it notices that it's just replacing one instantiator with
+another, then instead of just copy-tree'ing the first one and
+throwing away the other, it should use copy-over-tree, to save lots
+of garbage when repeatedly called.
+
+ILLEGIBLE: GOTO LOO BUI BUGS LAST PNOTE
+
+@item
+When inside image-instantiate:
+@itemize @bullet
+@item
+Some properties in the instantiators could be implemented through
+dynamically modifying an existing image instance (e.g. when the
+value of a slider or progress bar or text in a text field
+changes).  So when we hash, we only hash the part of the
+instantiator that cannot be dynamically modified (We might need
+to do something tricky here - allowing a :key property in hash
+tables or @strong{ILLEGIBLE}).  Anyway, so we need to generate an image
+instance, and we mask off the dynamic properties and look up in
+our hash table, and we get something back!  But is it ours to
+modify?  (We already checked to see it wasn't exactly the same
+dynamic properties that it had)  Thus ---
+@end itemize
+
+@item
+Reference counting.  Somehow or other, each image instance in the
+cache needs to keep track of the instantiators that generated it.
+@end enumerate
+
+It might do this through some sort of special instantiator-reference
+object.  This points to the instantiator, where in the hierarchy the
+instantiator is etc.  When an instantiator gets removed, this
+gu*ILLEGIBLE* values report not attached.  Somehow that gets
+communicated back to the image instance in the cache.  So somehow or
+other, the image instance in the cache knows who's using it, and so
+when you keep updating the slider value by simply modifying an
+instantiator (which efficiently changes the internal structure of the
+specifier), eventually image-instantiate notices that the image
+instance it points to has no other user and just modifies it.  In
+complex situations some optimizations get lost, but everything is
+still correct.
+
+vs.
+
+Andy's set-image-instance-property, which achieves the same
+optimizations much more easily, but
+
+@enumerate
+@item
+falls apart in any more complicated system
+
+@item
+only works because of the way the caching system in XEmacs works.
+Any change (e.g. @strong{ILLEGIBLE} more of making the caches GQ instead
+of GQ) is likely to make things stop working right in all but the
+simplest situation.
+@end enumerate
+
+@subheading Specifier improvements for support of specifier inheritance (necessary for the new font mapping API)
+
+'Fallback should be a locale/domain.
+
+@example
+(get-specifier specifier &optional locale)
+
+#### If locale is omitted, should it be (current-buffer) or 'global?
+#### Should argument not be optional?
+@end example
+
+If a buffer is specified: find a window showing buffer by looking
+
+@itemize @bullet
+@item
+at selected window
+@item
+at other windows on selected frame
+@item
+at selected windows on other frames in selected device
+@item
+at other windows on ""
+@item
+at selected windows on selected frames on other devices in selected
+console.
+@item
+other windows sel from other devices sel con
+@item
+""       oth       ""           sel
+@item
+sel win sel from sel dev oth con
+@item
+oth win sel from sel dev oth con
+@item
+sel win oth from sel dev oth con
+@item
+oth win oth from sel dev oth con
+@item
+sel win sel from oth dev oth con
+@item
+oth win sel from oth dev oth con
+@item
+oth win oth from oth dev oth con
+@end itemize
+
+If none, use  buffer -> sel from -> etc.
+
+@example
+Returns multiple values
+  second is instantiator
+  third  is locale containing inst.
+  fourth is tag set
+
+(restart-specifier-instance ...)
+@end example
+
+like specifier-instance, but allows restarting the lookup, for
+implementing inheritance, etc.  Obsoletes
+specifier-matching-find-charset, or whatever it is.  The restart
+argument is opaque, and is returned as a multiple value of
+restart-specifier-instance.  (It's actually an integer with the low
+bits holding the locale and the other bits counting into the list
+attached to the locale.)
+
+@node Future Work -- Display Tables, Future Work -- Making Elisp Function Calls Faster, Future Work -- Specifiers, Future Work
+@section Future Work -- Display Tables
+@cindex future work, display tables
+@cindex display tables, future work
+
+#### It would also be really nice if you could specify that the
+characters come out in hex instead of in octal.  Mule does that by
+adding a @code{ctl-hexa} variable similar to @code{ctl-arrow}, but
+that's bogus -- we need a more general solution.  I think you need to
+extend the concept of display tables into a more general conversion
+mechanism.  Ideally you could specify a Lisp function that converts
+characters, but this violates the Second Golden Rule and besides would
+make things way way way way slow.
+
+So instead, we extend the display-table concept, which was historically
+limited to 256-element vectors, to one of the following:
+
+@enumerate
+@item
+A 256-entry vector, for backward compatibility;
+@item
+a char-table, mapping characters to values;
+@item
+a range-table, mapping ranges of characters to values;
+@item
+a list of the above.
+@end enumerate
+
+The fourth option allows you to specify multiple display tables instead
+of just one.  Each display table can specify conversions for some
+characters and leave others unchanged.  The way the character gets
+displayed is determined by the first display table with a binding for
+that character.  This way, you could call a function
+@code{enable-hex-display} that adds a hex display-table to the list of
+display tables for the current buffer.
+
+#### ...not yet implemented...  Also, we extend the concept of "mapping"
+to include a printf-like spec.  Thus you can make all extended
+characters show up as hex with a display table like this:
+
+@example
+    #s(range-table data ((256 524288) (format "%x")))
+@end example
+
+Since more than one display table is possible, you have
+great flexibility in mapping ranges of characters.
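+
+Here is what the @code{enable-hex-display} function mentioned above
+might look like (a sketch only: it assumes the not-yet-implemented
+printf-style mapping just described, assumes range tables are valid
+display-table instantiators, and simply adds a buffer-local spec to the
+@code{current-display-table} specifier rather than consing onto an
+existing list of tables):
+
+@example
+(defun enable-hex-display ()
+  "Show extended characters in hex rather than octal in this buffer."
+  (interactive)
+  (add-spec-to-specifier
+   current-display-table
+   ;; map all non-ASCII characters to a printf-style format spec
+   #s(range-table data ((256 524288) (format "%x")))
+   (current-buffer)))
+@end example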
+
+@uref{http://www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work -- Making Elisp Function Calls Faster, Future Work -- Lisp Engine Replacement, Future Work -- Display Tables, Future Work
+@section Future Work -- Making Elisp Function Calls Faster
+@cindex future work, making Elisp function calls faster
+@cindex making Elisp function calls faster, future work
+
+@strong{Abstract: }This page describes many optimizations that can be
+made to the existing Elisp function call mechanism without too much
+effort.  The most important optimizations can probably be implemented
+with only a day or two of work.  I think it's important to do this work
+regardless of whether we eventually decide to replace the Lisp engine.
+
+Many complaints have been made about the speed of Elisp, and in
+particular about the slowness in executing function calls, and rightly
+so.  If you look at the implementation of the @code{funcall} function,
+you'll notice that it does an incredible amount of work.  Now logically,
+it doesn't need to be so.  Let's look first from the theoretical
+standpoint at what absolutely needs to be done to call a Lisp function.
+
+First, let's look at the situation that would exist if we were smart
+enough to have made lexical scoping be the default language policy.  We
+know at compile time exactly which code can reference the variables that
+are the formal parameters for the function being called (specifically,
+only the code that is part of that function's definition) and where
+these references are.  As a result, we can simply push all the values of
+the variables onto a stack, and convert all the variable references in
+the function definition into stack references.  Therefore, binding
+lexically-scoped parameters in preparation for a function call involves
+nothing more than pushing the values of the parameters onto a stack and
+then setting a new value for the frame pointer, at the same time
+remembering the old one.  Because the byte-code interpreter has a
+stack-based architecture, however, the parameter values have already
+been pushed onto the stack at the time of the function call invocation.
+Therefore, binding the variables involves doing nothing at all, other
+than dealing with the frame pointer.
+
+With dynamic scoping, the situation is somewhat more complicated.
+Because the parameters can be referenced anywhere, and these references
+cannot be located at compile time, their values have to be stored into a
+global table that maps the name of the parameter to its current value.
+In Elisp, this table is called the @dfn{obarray}.  Variable binding in
+Elisp is done using the C function @code{specbind()}. (This stands for
+"special variable binding" where @dfn{special} is the standard Lisp
+terminology for a dynamically-scoped variable.)  What @code{specbind()}
+does, essentially, is retrieve the old value of the variable out of the
+obarray, remember the value by pushing it, along with the name of the
+variable, onto what's called the @dfn{specpdl} stack, and then store the
+new value into the obarray.  The term "specpdl" means @dfn{Special
+Variable Pushdown List}, where @dfn{Pushdown List} is an archaic computer
+science term for a stack that used to be popular at MIT.  These binding
+operations, however, should still not take very much time because of the
+use of symbols, i.e. because the location in the obarray where the
+variable's value is stored has already been determined (specifically, it
+was determined at the time that the byte code was loaded and the symbol
+created), so no expensive hash table lookups need to be performed.
+
+An actual function invocation in Elisp does a great deal more work,
+however, than was just outlined above.  Let's just take a look at what
+happens when one byte-compiled function invokes another byte-compiled
+function, checking for places where unnecessary work is being done and
+determining how to optimize these places.
+
+@enumerate
+@item 
+
+The byte-compiled function's parameter list is stored in exactly the
+format that the programmer entered it in, which is to say as a Lisp
+list, complete with @code{&optional} and @code{&rest} keywords.
+This list has to be parsed for @emph{every} function invocation, which
+means that for every element in a list, the element is checked to see
+whether it's the @code{&optional} or @code{&rest} keyword, its
+surrounding cons cell is checked to make sure that it is indeed a cons
+cell, the @code{QUIT} macro is called, etc.  What should be happening
+here is that the argument list is parsed exactly once, at the time that
+the byte code is loaded, and converted into a C array.  The C array
+should be stored as part of the byte-code object.  The C array should
+also contain, in addition to the symbols themselves, the number of
+required and optional arguments.  At function call time, the C array can
+be very quickly retrieved and processed.  (The sketch following this
+list shows that the raw Lisp argument list really is kept around.)
+@item 
+
+For every variable that is to be bound, the @code{specbind()} function
+is called.  This actually does quite a lot of things, including:
+
+@enumerate
+@item 
+
+Checking the symbol argument to the function to make sure it's actually
+a symbol.
+@item 
+
+Checking for specpdl stack overflow, and increasing its size as
+necessary.
+@item 
+
+Calling @code{symbol_value_buffer_local_info()} to retrieve buffer local
+information for the symbol, and then processing the return value from
+this function in a series of if statements.
+@item 
+
+Actually storing the old value onto the specpdl stack.
+@item 
+
+Calling @code{Fset()} to change the variable's value.
+
+@end enumerate
+
+
+@end enumerate
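+
+The following illustrates that the raw Lisp argument list really is
+kept around as written (this runs in current XEmacs, assuming the usual
+@code{compiled-function-arglist} accessor; the point is that this list
+must be re-parsed on every call):
+
+@example
+(defun demo (a &optional b &rest c)
+  (list a b c))
+(byte-compile 'demo)
+(compiled-function-arglist (symbol-function 'demo))
+     @result{} (a &optional b &rest c)
+@end example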
+
+
+
+The entire series of calls to @code{specbind()} should be inlined and
+merged into the argument processing code as a single tight loop, with no
+function calls in the vast majority of cases.  The @code{specbind()}
+logic should be streamlined as follows:
+
+@enumerate
+@item 
+
+The symbol argument type checking is unnecessary.
+@item 
+
+The check for the specpdl stack overflow needs to be done only once, not
+once per argument.
+@item 
+
+All of the remaining logic should be boiled down as follows:
+
+@enumerate
+@item 
+
+Retrieve the old value from the symbol's value cell.
+@item 
+
+If this value is a symbol-value-magic object, then call the real
+@code{specbind()} to do the work.
+@item 
+
+Otherwise, we know that nothing complicated needs to be done, so we
+simply push the symbol and its value onto the specpdl stack, and then
+replace the value in the symbol's value cell.
+@item 
+
+The only logic that we are omitting is the code in @code{Fset()} that
+checks to make sure a constant isn't being set.  These checks should be
+made at the time that the byte code for the function is loaded and the C
+array of parameters to the function is created.  (Whether a symbol is
+constant or not is generally known at XEmacs compile time.  The only
+issue here is with symbols whose names begin with a colon.  These
+symbols should simply be disallowed completely as parameter names.)
+
+@end enumerate
+
+
+@end enumerate
+
+
+
+Other optimizations that could be done are:
+
+@itemize
+@item 
+
+At the beginning of the function that implements the byte-code
+interpreter (this is the Lisp primitive @code{byte-code}), the string
+containing the actual byte code is converted into an array of integers.
+I added this code specifically for MULE so that the byte-code engine
+didn't have to deal with the complexities of the internal string format
+for text.  This conversion, however, is generally useful because on
+modern processors accessing 32-bit values out of an array is
+significantly faster than accessing unaligned 8-bit values.  This
+conversion takes time, though, and should be done once at load time
+rather than each time the byte code is executed.  This array should be
+stored in the byte-code object.  Currently, this is a bit tricky to do,
+because @code{byte-code} is not actually passed the byte-code object,
+but rather three of its elements.  We can't just change @code{byte-code}
+so that it is directly passed the byte-code object because this
+function, with its existing argument calling pattern, is called directly
+from compiled Elisp files.  What we can and should do, however, is
+create a subfunction that does take a byte-code object and actually
+implements the byte-code interpreter engine.  Whenever the C code wants
+to execute byte code, it calls this subfunction.  @code{byte-code}
+itself also calls this subfunction after conjuring up an appropriate
+byte-code object and storing its arguments into this object.  With a
+small amount of work, it's possible to do this conjuring in such a way
+that it doesn't generate any garbage.
+@item 
+
+At the end of a function call, the parameter bindings that have been
+done need to be undone.  This is standardly done by calling
+@code{unbind_to()}.  Just as for a @code{specbind()}, this function does
+a lot of work that is unnecessary in the vast majority of cases, and it
+could also be inlined and streamlined.
+@item 
+
+As part of each Elisp function call, a whole bunch of checks are done
+for a series of unlikely but possible conditions that may occur.  These
+include, for example,
+
+@itemize
+@item 
+
+Calling the @code{QUIT} macro, which essentially involves
+checking a global volatile variable to see whether additional processing
+needs to be done.
+@item 
+
+Checking whether a garbage collection needs to be done.
+@item 
+
+Checking the variable @code{debug_on_next_call}.
+@item 
+
+Checking for whether Elisp profiling is active.  (An additional
+optimization that's perhaps not worth the effort is to do some
+post-processing on the array of integers after it has been converted.
+For example, whenever a 16-bit value occurs in the byte code, it has
+to be encoded as two separate 8-bit values.  These values could be
+combined.  The tricky part here is that all of the places where a goto
+occurs across the place where this modification is made would have to
+have their offsets changed.  Other such optimizations can easily be
+imagined as well.)
+
+@end itemize
+
+@item 
+
+With a little bit smarter code, it should be possible to make a
+single trip variable that indicates whether any of these conditions is
+true.  This variable would be updated by any code that changes the
+actual variables whose values are checked in the various checks just
+mentioned.  (By the way, all of this is occurring in the C function
+@code{funcall_recording_as()}.)  There is a little bit of code
+between each of the checks.  This code would simply have to be
+duplicated between the two cases where this general trip variable is
+true and is false.  (Note: the optimization detailed in this item is
+probably not worth doing on the first pass.)
+
+@end itemize
+
+@uref{http://www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work -- Lisp Engine Replacement,  , Future Work -- Making Elisp Function Calls Faster, Future Work
+@section Future Work -- Lisp Engine Replacement
+@cindex future work, lisp engine replacement
+@cindex lisp engine replacement, future work
+
+@menu
+* Future Work -- Lisp Engine Discussion::  
+* Future Work -- Lisp Engine Replacement -- Implementation::  
+@end menu
+
+@node Future Work -- Lisp Engine Discussion, Future Work -- Lisp Engine Replacement -- Implementation, Future Work -- Lisp Engine Replacement, Future Work -- Lisp Engine Replacement
+@subsection Future Work -- Lisp Engine Discussion
+@cindex future work, lisp engine discussion
+@cindex lisp engine discussion, future work
+
+
+@strong{Abstract: }Recently there has been a great deal of talk on the
+XEmacs mailing lists about potential changes to the XEmacs Lisp engine.
+Usually the discussion has centered around the question which is better,
+Common Lisp or Scheme?  This is certainly an interesting debate topic,
+but it didn't seem to have much practical relevance to me, so I vowed to
+stay out of the discussion.  Recently, however, it seems that people are
+losing sight of the broader picture.  For example, nobody seems to be
+asking the question, ``"Would an extension language other than Lisp or
+Scheme (perhaps not a Lisp variant at all) be more appropriate?"'' Nor
+does anybody seem to be addressing what I consider to be the most
+fundamental question, is changing the extension language a good thing to
+do?
+
+I think it would be a mistake at this point in XEmacs development to
+begin any project involving fundamental changes to the Lisp engine or to
+the XEmacs Lisp language itself.  It would take a huge amount of effort
+to complete even part of this project, and would be a major drain on the
+already-insufficient resources of the XEmacs development community.
+Most of the gains that are purported to stem from a project such as this
+could be obtained with far less effort by making more incremental
+changes to the XEmacs core.  I think it would be an even bigger mistake
+to change the actual XEmacs extension language (as opposed to just
+changing the Lisp engine, making few, if any, externally visible
+changes).  The only language change that I could possibly imagine
+justifying would involve switching to some ubiquitous web language, such
+as Java and JavaScript, or Perl.  (Even among those, I think Java would
+be the only possibility that really makes sense).
+
+In the rest of this document I'll present the broader issues that would
+be involved in changing the Lisp engine or extension language.  This
+should make clear why I've come to believe as I do.
+
+@subheading Is everyone clear on the difference between interface and implementation?
+
+There seems to be a great deal of confusion concerning the difference
+between interface and implementation.  In the context of XEmacs,
+changing the interface means switching to a different extension language
+such as Common Lisp, Scheme, Java, etc.  Changing the implementation
+means using a different Lisp engine.  There is obviously some relation
+between these two issues, but there is no particular requirement that
+one be changed if the other is changed.  It is quite possible, for
+example, to imagine taking the underlying engine for any of the various
+Lisp dialects in existence, and adapting it so that it implements the
+same Elisp extension language that currently exists.  The vast majority
+of the purported benefits that we would get from changing the extension
+language could just as easily be obtained while making minimal changes
+to the external Elisp interface.  This way nearly all existing Elisp
+programs would continue to work, there would be no need to translate
+Elisp programs into some other language or to simultaneously support two
+incompatible Lisp variants, and there would be no need for users or
+package authors to learn a new extension language that would be just as
+unfamiliar to the vast majority of them as Elisp is.
+
+@subheading Why should we change the Lisp engine?
+
+Let's go over the possible reasons for changing the Lisp engine.
+
+@subsubheading Speed.
+
+Changing the Lisp engine might make XEmacs faster.  However,
+consider the following.
+
+@enumerate
+@item           
+
+XEmacs will get faster over time without any development effort at all
+because computers will get faster.
+@item           
+
+Perhaps the biggest causes of the slowness of XEmacs are not related to
+the Lisp engine at all.  It has been asserted, for example, that the
+slowness of XEmacs is primarily due to the redisplay mechanism, to the
+handling of insertion and deletion of text in a buffer, to the event
+loop, etc.  Nobody has done any real studies to determine what the
+actual cause of slowness is.
+@item           
+
+Emacs 18 seems plenty fast enough to most people.  However, Emacs 18
+also had a worse Lisp engine and a worse byte compiler than XEmacs.
+@item           
+
+Significant speed increases in the execution of Lisp code could be
+achieved without too much effort by working on the existing byte code
+interpreter and function call mechanism a bit.
+
+@end enumerate
+
+@subsubheading Memory usage.
+
+A new Lisp engine with a better garbage collection mechanism might make
+more efficient use of memory; for example, through the use of a
+relocating garbage collector.  However, consider this:
+
+@enumerate
+@item           
+
+A new Lisp engine would probably have a larger memory footprint, perhaps
+a significantly larger one.
+@item           
+
+The worst memory problems might not be due to Lisp object inefficiency
+at all.  The problems could simply be due mainly to the inefficient
+buffer representation.  Nobody has come up with any concrete numbers on
+where the real problem lies.
+
+@end enumerate
+
+@subsubheading Robustness.
+
+A new Lisp engine might well be more robust.  (On the other hand, it
+might not be.  It is not always easy to tell).  However, I think that
+the biggest problems with robustness are in the part of the C code that
+is not concerned with implementing the Lisp engine.  The redisplay
+mechanism and the unexec mechanism are probably the biggest sources of
+robustness problems.  I think the biggest robustness problems that are
+related to the Lisp engine concern the use of GCPRO declarations.  The
+entire GCPRO mechanism is ill-conceived and unsafe.  The only real way
+to make this safe would be to do conservative garbage collection over
+the C stack and to eliminate the GCPRO declarations entirely.  But how
+many of the Lisp engines that are being considered have such a mechanism
+built into them?
+
+
+@subsubheading Maintainability.
+
+A new Lisp engine might well improve the maintainability of XEmacs by
+offloading the maintenance of the Lisp engine.  However, we need to make
+very sure that this is, in fact, the case before embarking on a project
+like this.  We would almost certainly have to make significant
+modifications to any Lisp engine that we choose to integrate, and
+without the active and committed support and cooperation of the
+developers of that Lisp engine, the maintainability problem would
+actually get worse.
+
+@subsubheading Features.
+
+A new Lisp engine might have built in support for various features that
+we would like to add to the XEmacs extension language, such as lexical
+scoping and an object system.
+
+@subheading Why would we want to change the extension language?
+
+Possible reasons for changing the extension language include:
+
+@subsubheading More standard. 
+
+Switching to a language that is more standard and more commonly in use
+would be beneficial for various reasons.  First of all, the language
+that is more commonly used and more familiar would make it easier for
+users to write their own extensions and in general, increase the
+acceptance of XEmacs.  Also, an accepted standard probably has had a lot
+more thought put into it than any language interface created by the
+XEmacs developers themselves.  Furthermore, if our extension language is
+being actively developed and supported, much of the work that we would
+otherwise have to do ourselves is transferred elsewhere.
+
+However, both Scheme and Common Lisp flunk the familiarity test.
+Neither language is being actively used for program development outside
+of small research communities, and few prospective authors of XEmacs
+extensions will be familiar with any Lisp variant for real world uses.
+(I consider the argument that Scheme is often used in introductory
+programming courses to be irrelevant.  Many existing programmers were
+taught Pascal in their introductory programming courses.  How many of
+them would actually be comfortable writing a program in Pascal?)
+Furthermore, someone who wants to learn Lisp can't exactly go to their
+neighborhood bookstore and pick up a book on this topic.
+
+@subsubheading Ease of use.
+
+There are endless arguments about which language is easiest to use.  In
+practice, this largely boils down to which languages are most familiar.
+
+@subsubheading Object oriented.
+
+The object-oriented paradigm is the dominant one in use today for new
+languages.  User interface concepts in particular are expressed very
+naturally in an object-oriented system.  However, neither Scheme nor
+Common Lisp has been designed with object orientation in mind.  There is
+a standard object system for Common Lisp, but it is extremely complex
+and difficult to understand.
+
+
+@uref{http://www.666.com/ben/default.htm,Ben Wing}
+
+
+@node Future Work -- Lisp Engine Replacement -- Implementation,  , Future Work -- Lisp Engine Discussion, Future Work -- Lisp Engine Replacement
+@subsection Future Work -- Lisp Engine Replacement -- Implementation
+@cindex future work, lisp engine replacement, implementation
+@cindex lisp engine replacement, implementation, future work
+
+Let's take a look at the sort of work that would be required if we were
+to replace the existing Elisp engine in XEmacs with some other engine,
+for example, the Clisp engine.  I'm assuming here, of course, that we
+are not going to be changing the interface here at the same time, which
+is to say that we will be keeping the same Elisp language that we
+currently have as the extension language for XEmacs, except perhaps for
+incremental changes that we will make, such as lexical scoping and
+proper structure support in an attempt to gradually move the language
+towards an upwardly-compatible goal, such as Common Lisp.  I am writing
+this page primarily as food for thought.  I feel fairly strongly that
+actually doing this work would be a big waste of effort that would
+inevitably become a huge time sink on the part of nearly everyone
+involved in XEmacs development, and not only for the ones who were
+supposed to be actually doing the engine change.  I feel that most of
+the desired changes that we want for the language and/or the engine can
+be achieved with much less effort and time through incremental changes
+to the existing code base.
+
+First of all, in order to make a successful Lisp engine change in
+XEmacs, it is vitally important that the work be done through a series
+of incremental stages where at the end of each stage XEmacs can be
+compiled and run, and it works.  It is tempting to try to make the
+change all at once, but this would be disastrous.  If the resulting
+product worked at all, it would inevitably contain a huge number of
+subtle and extremely difficult to track down bugs, and it would be next
+to impossible to determine which of the myriad changes made introduced
+the bug.
+
+Now let's look at what the possible stages of implementation could be.
+
+@subsubheading An Extra C Preprocessing Stage
+
+The first step would be to introduce another preprocessing stage for the
+XEmacs C code, which is done before the C compiler itself is invoked on
+the code, and before the standard C preprocessor runs.  The C
+preprocessor is simply not powerful enough to do many of the things we
+would like to do in the C code.  The existing results of this have been
+a combination of a lot of hacked-up and tricky-to-maintain stuff (such
+as the @code{DEFUN} macro, and the associated @code{DEFSUBR}), code
+constructs that are difficult to write (consider, for example,
+attempting to do structured exception handling, such as catch/throw and
+unwind-protect constructs), and code that is potentially or actually
+unsafe (such as the uses of @code{alloca}, which could easily cause
+stack overflow with large amounts of memory allocated in this
+fashion).  The problem is that the C preprocessor does not allow macros
+to have the power of an actual language, such as C or Lisp.  What our
+own preprocessor should do is allow us to define macros, whose
+definitions are simply functions written in some language which are
+executed at compile time, and whose arguments are the actual argument
+for the macro call, as well as an environment which should have a data
+structure representation of the C code in the file and allow this
+environment to be queried and modified.  It can be debated what the
+language should be that these extensions are written in.  Whatever the
+language chosen, it needs to be a very standard language and a language
+whose compiler or interpreter is available on all of the platforms that
+we could ever possibly consider porting XEmacs to, which is basically to
+say all the platforms in existence.  One obvious choice is C, because
+there will obviously be a C compiler available, because it is needed to
+compile XEmacs itself.  Another possibility is Perl, which is already
+installed on most systems, and is universally available on all others.
+This language has powerful text processing facilities which would
+probably make it possible to implement the macro definitions more
+quickly and easily; however, this might also encourage bad coding
+practices in the macros (often simple text processing is not
+appropriate, and more sophisticated parsing or recursive data structure
+processing needs to be done instead), and we'd have to make sure that
+the nested data structure that comprises the environment could be
+represented well in Perl.  Elisp would not be a good choice because it
+would create a bootstrapping problem.  Other possible languages, such as
+Python, are not appropriate, because most programmers are unfamiliar
+with this language (creating a maintainability problem) and the Python
+interpreter would have to be included and compiled as part of the XEmacs
+compilation process (another maintainability problem).  Java is still
+too much in flux to be considered at this point.
+
+The macro facility that we will provide needs to add two features to the
+language: the ability to define a macro, and the ability to call a
+macro.  One good way of doing this would be to make use of special
+characters that have no meaning in the C language (or in C++ for that
+matter), and thus can never appear in a C file outside of comments and
+strings.  Two obvious characters are the @@ sign and the $ sign.  We
+could, for example, use @code{@@} defined to define new macros, and the
+@code{$} sign followed by the macro name to call a macro.  (Proponents
+of Perl will note that both of these characters have a meaning in Perl.
+This should not be a problem, however, because the way that macros are
+defined and called inside of another macro should not be through the use
+of any special characters which would in effect be extending the macro
+language, but through function calls made in the normal way for the
+language.)
+
+The program that actually implements this extra preprocessing stage
+needs to know a certain amount about how to parse C code.  In
+particular, it needs to know how to recognize comments, strings,
+character constants, and perhaps certain other kinds of C tokens, and
+needs to be able to parse C code down to the statement level.  (This is
+to say it needs to be able to parse function definitions and to separate
+out the statements, @code{if} blocks, @code{while} blocks, etc. within
+these definitions.  It probably doesn't, however need to parse the
+contents of a C expression.)  The preprocessing program should work
+first by parsing the entire file into a data structure (which may just
+contain expressions in the form of literal strings rather than a data
+structure representing the parsed expression).  This data structure
+should become the environment parameter that is passed as an argument to
+macros as mentioned above.  The implementation of the parsing could and
+probably should be done using @code{lex} and @code{yacc}.  One good idea
+is simply to steal some of the @code{lex} and @code{yacc} code that is
+part of GCC.
+
+Here are some possibilities that could be implemented as part of the
+preprocessing:
+
+@enumerate
+@item 
+
+A proper way of doing the @code{DEFUN} macros.  These could, for
+example, take an argument list in the form of a Lisp argument list
+(complete with keyword parameters and other complex features) and
+automatically generate the appropriate @code{subr} structure, the
+appropriate C function definition header, and the appropriate call to
+the @code{DEFSUBR} initialization function.
+@item 
+
+A truly safe and easy to use implementation of the @code{alloca}
+function.  This could allocate the memory in any fashion it chooses
+(calling @code{malloc}, using a large global array, or a series of such
+arrays, etc.) and insert code in the appropriate places to
+automatically free up this memory.  (Appropriate places here would be at
+the end of the function and before any return statements.  Non-local
+exits can be handled in the function that actually implements the
+non-local exit.)
+@item 
+
+If we allow for the possibility of having an arbitrary Lisp engine, we
+can't necessarily assume that we can call Lisp primitives implemented in
+C from other C functions by simply making a function call.  Perhaps
+something special needs to happen when this is done.  This could be
+handled fairly easily by having our new and improved @code{DEFUN} macro
+define a new macro for use when calling a primitive.
+@end enumerate
+
+
+@subsubheading Make the Existing Lisp Engine be Self-contained.
+
+The goal of this stage is to gradually build up a self-contained Lisp
+engine out of the existing XEmacs core, which has no dependencies on any
+of the code elsewhere in the XEmacs core, and has a well-defined and
+black box-style interface.  (This is to say that the rest of the C code
+should not be able to access the implementation of the Lisp engine, and
+should make as few assumptions as possible about how this implementation
+works).  The Lisp engine could, and probably should, be built up as a
+separate library which can be compiled on its own without any of the
+rest of the XEmacs C code, and can be tested in this configuration as
+well.
+
+The creation of this engine library should be done as a series of
+sub-steps, each of which moves more code out of the XEmacs core and into
+the engine library, and XEmacs should be compilable and runnable after
+each sub-step.  One possible series of sub-steps would be to first
+create an engine that does only object allocation and garbage
+collection; then, as a second sub-step, move in the code that handles
+symbols, symbol values, and simple binding; and then finally move in the
+code that handles control structures, function calling, @code{byte-code}
+execution, exception handling, etc.  (It might well be possible to
+separate this last sub-step further.)
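+
+As an illustration of what the black-box interface of such an engine
+library might begin to look like, here is a minimal header sketch.
+Every name in it is hypothetical and invented for this illustration; a
+real interface would of course be far larger.
+
+@example
+/* lisp-engine.h -- hypothetical public interface to a self-contained
+   Lisp engine library. */
+
+#ifndef LISP_ENGINE_H
+#define LISP_ENGINE_H
+
+#include <stddef.h>
+
+/* Lisp objects are exposed only as opaque handles; clients never see
+   the underlying C structures. */
+typedef struct lisp_engine lisp_engine;
+typedef struct lisp_object_handle *lisp_ref;
+
+/* Engine lifetime. */
+lisp_engine *lisp_engine_create (void);
+void lisp_engine_destroy (lisp_engine *eng);
+
+/* Allocation and basic object construction. */
+lisp_ref lisp_make_cons (lisp_engine *eng, lisp_ref car, lisp_ref cdr);
+lisp_ref lisp_make_string (lisp_engine *eng, const char *data, size_t len);
+lisp_ref lisp_intern (lisp_engine *eng, const char *name);
+
+/* Accessors that never hand out raw pointers into engine memory. */
+lisp_ref lisp_car (lisp_engine *eng, lisp_ref cons);
+lisp_ref lisp_cdr (lisp_engine *eng, lisp_ref cons);
+
+/* Evaluation and function calling. */
+lisp_ref lisp_eval (lisp_engine *eng, lisp_ref form);
+lisp_ref lisp_apply (lisp_engine *eng, lisp_ref fn, int nargs,
+                     lisp_ref *args);
+
+/* Garbage collection is entirely the engine's business; clients can at
+   most register and unregister roots. */
+void lisp_register_root (lisp_engine *eng, lisp_ref *slot);
+void lisp_unregister_root (lisp_engine *eng, lisp_ref *slot);
+
+#endif /* LISP_ENGINE_H */
+@end example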
+
+@subsubheading Removal of Assumptions About the Lisp Engine Implementation
+
+Currently, the XEmacs C code makes all sorts of assumptions about the
+implementation of the Lisp engine, particularly in the areas of object
+allocation, object representation, and garbage collection.  A different
+Lisp engine may well have different ways of doing these implementations,
+and thus the XEmacs C code must be rid of any assumptions about these
+implementations.  This is a tough and tedious job, but it needs to be
+done.  Here are some examples:
+
+@enumerate
+@item 
+
+@code{GCPRO} must go.  The @code{GCPRO} mechanism is tedious,
+error-prone, unmaintainable, and fundamentally unsafe.  As anyone who
+has worked on the C Core of XEmacs knows, figuring out where to insert
+the @code{GCPRO} calls is an exercise in black magic, and debugging
+crashes as a result of incorrect @code{GCPROing} is an absolute
+nightmare.  Furthermore, the entire mechanism is fundamentally unsafe.
+Even if we were to use the extra preprocessing stage detailed above to
+automatically generate @code{GCPRO} and @code{UNGCPRO} calls for all
+Lisp object variables occurring anywhere in the C code, there are still
+places where we could be bitten.  Consider, for example, code which
+calls @code{cons} and where the two arguments to this function are both
+calls to the @code{append} function.  Now the @code{append} function
+generates new Lisp objects, and it also calls @code{QUIT}, which could
+potentially execute arbitrary Lisp code and cause a garbage collection
+before returning control to the @code{append} function.  Now in order to
+generate the arguments to the @code{cons} function, the @code{append}
+function is called twice in a row.  When the first @code{append} call
+returns, new Lisp data has been created, but has no @code{GCPRO}
+pointers to it.  If the second @code{append} call causes a garbage
+collection, the Lisp data from the first @code{append} call will be
+collected and recycled, which is likely to lead to obscure and
+impossible-to-debug crashes.  The only way around this would be to
+rewrite all function calls whose parameters are Lisp objects in terms of
+temporary variables, so that no such function calls ever contain other
+function calls as arguments.  This would not only be annoying to
+implement, even in a smart preprocessor, but would make the C code
+become incredibly slow because of all the constant updating of the
+@code{GCPRO} lists.
+@item 
+
+The only proper solution here is to completely do away with the
+@code{GCPRO} mechanism and simply do conservative garbage collection
+over the C stack.  There are already portable implementations of
+conservative pointer marking over the C stack, and these could easily be
+adapted for use in the Elisp garbage collector.  If, as outlined above,
+we use an extra preprocessing stage to create a new version of
+@code{alloca} that allocates its memory elsewhere than actually on the C
+stack, and we ensure that we don't declare any large arrays as local
+variables, but instead use @code{alloca}, then we can be guaranteed that
+the C stack is small and thus that the conservative pointer marking
+stage will be fast and not very likely to find false matches.
+@item 
+
+Removing the @code{GCPRO} declarations as just outlined would also
+remove the assumption currently made that garbage collection can occur
+only in certain places in the C code, rather than in any arbitrary spot.
+(For example, any time an allocation of Lisp data happens).  In order to
+make things really safe, however, we also have to remove another
+assumption as detailed in the following item.
+@item 
+
+Lisp objects might be relocatable.  Currently, the C code assumes that
+Lisp objects other than string data are not relocatable and therefore
+it's safe to pass around and hold onto the actual pointers for the C
+structures that implement the Lisp objects.  Current code, for example,
+assumes that a @code{Lisp_Object} of type buffer and a C pointer to a
+@code{struct buffer} mean basically the same thing, and indiscriminately
+passes the two kinds of buffer pointers around.  With relocatable Lisp
+objects, the pointers to the C structures might change at any time.
+(Remember, we are now assuming that a garbage collection can happen at
+basically any point).  All of the C code needs to be changed so that
+Lisp objects are always passed around using a Lisp object type, and the
+underlying pointers are only retrieved at the time when a particular
+data element out of the structure is needed.  (As an aside, here's
+another reason why Lisp objects, instead of pointers, should always be
+passed around.  If pointers are passed around, it's conceivable that at
+the time a garbage collection occurs, the only reference to a Lisp
+object (for example, a deleted buffer) would be in the form of a C
+pointer rather than a Lisp object.  In such a case, the conservative
+pointer marking mechanism might not notice the reference, especially if,
+in an attempt to eliminate false matches and make the code generally
+more efficient, it is written to look only for actual Lisp object
+references.)
+@item 
+
+I would go a step further and completely eliminate the macros that
+convert a Lisp object reference into a C pointer.  This way the only way
+to access an element out of a Lisp object would be to use the macro for
+that element, which in one atomic operation de-references the Lisp
+object reference and retrieves the value contained in the element.  We
+probably do need the ability to retrieve actual C pointers, though: for
+example, when an array is stored in a Lisp object, or simply for
+efficiency, when we might want some code to retrieve the C pointer for a
+Lisp object and work on it directly to avoid a whole bunch of extra
+indirections.  I think the way to do this would be through the use of a
+special locking construct implemented as part of the extra preprocessor
+stage mentioned above.  This would essentially be what you might call a
+@dfn{lock block}, just like a @code{while} block.  You'd write the word
+@code{lock}, followed by a parenthesized expression that retrieves the C
+pointer and stores it into a variable scoped only within the lock block,
+followed in turn by some code in braces (the actual code associated with
+the lock block), which can make use of this pointer.  While the code
+inside the lock block is executing, that particular pointer and the
+object pointed to by it are guaranteed not to be relocated.  (A sketch
+of what such a construct might expand to appears after this list.)
+@item 
+
+If all the XEmacs C code were converted according to these rules, there
+would be no restrictions on the sorts of implementations that can be
+used for the garbage collector.  It would be possible, for example, to
+have an incremental asynchronous relocating garbage collector that
+operated continuously in another thread while XEmacs was running.
+@item 
+
+The C implementation of Lisp objects might not, and probably should not,
+be visible to the rest of the XEmacs C code.  It should theoretically be
+possible, for example, to implement Lisp objects entirely in terms of
+association lists, rather than using C structures in the standard way.
+(This may be an extreme example, but it's good to keep in mind an
+example such as this when cleaning up the XEmacs C code).  The changes
+mentioned in the previous item would go a long way towards removing this
+assumption.  The only places where this assumption might still be made
+would be inside of the lock blocks where an actual pointer is retrieved.
+(Also, of course, we'd have to change the way that Lisp objects are
+defined in C so that this is done with some function calls and new and
+improved macros rather than by having the XEmacs C code actually define
+the structures.  This sort of thing would probably have to be done in
+any case once the allocation mechanism is moved into a separate
+library.)  With some thought it should be possible to define the lock
+block interface in such a way as to remove any assumptions about the
+implementation of Lisp objects.
+@item 
+
+C code may not be able to call Lisp primitives that are defined in C
+simply by making standard C function calls.  There might need to be some
+wrapper around all such calls.  This could be achieved cleanly through
+the extra preprocessing step mentioned above, in line with the example
+described there.
+
+@end enumerate
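+
+To make the @dfn{lock block} idea above more concrete, here is a tiny,
+self-contained sketch of the kind of code the preprocessor might emit.
+Everything in it is hypothetical: the @code{lock} syntax and the
+@code{toy_lock_ptr}/@code{toy_unlock_ptr} pin/unpin primitives are
+invented for this illustration and do not correspond to anything in the
+current source.
+
+@example
+#include <stdio.h>
+
+/* Hypothetical object representation and pin/unpin primitives; with a
+   real relocating collector these would tell the GC not to move the
+   object while it is pinned. */
+typedef struct { int pin_count; int value; } toy_object;
+
+static int  *toy_lock_ptr   (toy_object *o) { o->pin_count++; return &o->value; }
+static void  toy_unlock_ptr (toy_object *o) { o->pin_count--; }
+
+int
+main (void)
+{
+  toy_object obj = { 0, 41 };
+
+  /* The programmer would write something like
+       lock (int *p = toy_lock_ptr (&obj)) { (*p)++; }
+     and the preprocessor would emit the pin/unpin pair around the
+     block, guaranteeing that the raw pointer stays valid inside it: */
+  {
+    int *p = toy_lock_ptr (&obj);
+    {
+      (*p)++;
+    }
+    toy_unlock_ptr (&obj);
+  }
+
+  printf ("%d (pins outstanding: %d)\n", obj.value, obj.pin_count);
+  return 0;
+}
+@end example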
+
+@subsubheading Actually Replacing the Engine.
+
+Once we've done all of the work mentioned in the previous steps (and
+admittedly, this is quite a lot of work), we should have an XEmacs that
+still uses what is essentially the old and previously existing Lisp
+engine, but which is ready to have its Lisp engine replaced.  The
+replacement might proceed as follows:
+
+@enumerate
+@item 
+
+Identify any further changes that need to be made to the engine
+interface that we have defined as a result of the previous steps so that
+features and idiosyncrasies of various Lisp engines that we examine
+could be properly supported.
+@item 
+
+Pick a Lisp engine and write an interface layer that sits on top of this
+Lisp engine and makes it adhere to what I'll now call the XEmacs Lisp
+engine interface.
+@item 
+
+Strongly consider creating, if we haven't already done so, a test suite
+that can test the XEmacs Lisp engine interface when used with a
+stand-alone Lisp engine.
+@item 
+
+Test the hell out of the Lisp engine that we've chosen when combined
+with its XEmacs Lisp engine interface layer as a stand-alone program.
+@item 
+
+Now finally attach this stand-alone program to XEmacs itself.  Debug and
+fix any further problems that ensue (and there inevitably will be such
+problems), updating the test suite as we go along so that, if it were
+run again against the old, buggy interfaced Lisp engine, it would catch
+the bug.
+
+@end enumerate
+
+
+@uref{../../www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work Discussion, Old Future Work, Future Work, Top
+@chapter Future Work Discussion
+@cindex future work, discussion
+@cindex discussion, future work
+
+This chapter includes (mostly) email discussions about particular design
+issues, edited to include only relevant and useful stuff.  Ideally over
+time these could be condensed down to a single design document to go
+into the normal Future Work section.
+
+@menu
+* Discussion -- garbage collection::  
+* Discussion -- glyphs::        
+@end menu
+
+@node Discussion -- garbage collection, Discussion -- glyphs, Future Work Discussion, Future Work Discussion
+@section Discussion -- garbage collection
+@cindex discussion, garbage collection
+@cindex garbage collection, discussion
+
+
+@example
+On Tue, Oct 12, 1999 at 03:36:59AM -0700, Ben Wing wrote:
+@end example
+
+So what am I missing here?
+
+@example
+In response, Olivier Galibert wrote:
+@end example
+
+Two things:
+@enumerate
+@item
+The purespace is gone
+
+I mean absolutely, completely and utterly removed.  Fpurecopy is a
+no-op now (and has been for some time).  Readonly objects are gone
+too.  Having fewer checks to do in Fsetcar, Fsetcdr, Faset and some
+others is probably a good thing, speedwise.  I removed it some time
+ago because it does not make sense, when using a portable dumper, to
+copy data into a special area of memory at dump time, and I wanted to
+be sure that suppressing the copying from Fpurecopy wouldn't break
+things.
+
+Now, we want to get the post-dumping data sharing back, of course.  On
+today's systems, it is quite easy: you just have to map the file
+MAP_PRIVATE and avoid writing to the subset of pages you want to keep
+shared.  Copy-on-write does the job for you.  It has the nice side
+effect of completely avoiding bus errors due to trying to write to
+readonly memory zones.  [See the sketch after this list.]
+
+Avoiding writing to the "pure" objects themselves is already done, of
+course.  Had Lisp code written to the purecopied parts of the dumped
+data, it would have exploded long ago.  So there is nothing to do in
+this area.  The only remaining thing is the markbit.  Two possible
+strategies:
+
+@itemize @bullet
+@item
+have Fpurecopy somehow mark the lrecords it would have copied in the
+good old times.  Post-dump, use this mark as an "always marked, don't
+touch, don't look into, don't free" flag, the same way CHECK_PURE
+was used.
+@item
+move the markbit outside of the lrecord.
+@end itemize
+
+
+The second solution is more appealing to me for a bunch of reasons:
+@itemize @bullet
+@item
+more things are shared than only what is purecopied (not-yet-used
+functions come to mind)
+@item
+no more "the only references to this non-purecopied object are from
+purecopied objects, XEmacs will self-destruct in ten seconds" kind
+of bugs.
+@item
+removing flags goes the right way towards implementing Jan's
+allocator ideas.
+@item
+it probably becomes easier to experiment with the GC code
+@end itemize
+
+@item
+Finding all the dumped objects in order to unmark them sucks
+
+Not having to rebuild a list of all the dumped objects in order to
+find them all and ensure that all are unmarked simplifies things for
+me.  Errr, ok, now that I really think of it, I can rebuild this list
+easily, in fact.  And I'm probably going to have to manage it, since I
+feel like the lack of calls to the finalizers for the dumped objects
+is going to someday turn over and bite me in the face.  But anyways,
+it makes my life easier for now.
+
+So no, it's not a _necessity_.  But it helps.  And the automatic
+sharing of all objects until you write to them explicitly is, I
+think, really cool.
+@end enumerate
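+
+[Editorial note: the copy-on-write mapping described in item 1 can be
+illustrated with a short, self-contained sketch.  It maps a dump file
+with @code{mmap} and @code{MAP_PRIVATE}, so that pages the process
+never writes to remain shared with other processes mapping the same
+file.  The file name and the fact that the whole file is mapped in one
+piece are assumptions made for this example.]
+
+@example
+#include <sys/mman.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <unistd.h>
+
+/* Map a dump file copy-on-write.  Pages that are only ever read stay
+   shared among all processes mapping the file; the first write to a
+   page gives this process a private copy of just that page. */
+static void *
+map_dump_file (const char *path, size_t *size_out)
+{
+  struct stat st;
+  void *base;
+  int fd = open (path, O_RDONLY);
+
+  if (fd < 0)
+    return NULL;
+  if (fstat (fd, &st) < 0)
+    {
+      close (fd);
+      return NULL;
+    }
+
+  base = mmap (NULL, (size_t) st.st_size, PROT_READ | PROT_WRITE,
+               MAP_PRIVATE, fd, 0);
+  close (fd);                     /* the mapping survives the close */
+
+  if (base == MAP_FAILED)
+    return NULL;
+  *size_out = (size_t) st.st_size;
+  return base;
+}
+
+int
+main (void)
+{
+  size_t size;
+  void *dump = map_dump_file ("xemacs.dmp", &size);  /* name is hypothetical */
+
+  if (dump == NULL)
+    {
+      perror ("map_dump_file");
+      return 1;
+    }
+  printf ("mapped %lu bytes copy-on-write\n", (unsigned long) size);
+  return 0;
+}
+@end example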
+
+
+@example
+On 10/12/1999 5:49 PM Ben Wing wrote:
+                                                     
+Subject: Re: hashtable-based marking and cleanups
+@end example
+
+OK, I can see the advantages.  But:
+
+@enumerate
+@item
+There will be an inevitable loss of speed using a large hashtable.  If
+that loss is large, I say that it's just not worth it.  There are things
+that are so much more important than futzing around with the garbage
+collector (e.g. fixing the god damn user interface), things which, if
+not fixed, will sooner or later cause XEmacs to die entirely.  If we are
+causing a major slowdown in the name of some not-so-important work that
+may or may not get done, we shouldn't do it.  (On the other hand, if the
+slowdown is negligible, I have no problems with this.)
+
+@item
+I think you should @strong{expand} the concept of read-only objects so
+that @strong{any} object (especially strings and cons cells) can get
+marked read-only by the C code if it wants. (Perhaps you could use the
+now-unused mark bit to hold a read-only flag.) This is important because
+it allows C code to directly return internal lists (e.g. from the
+specifiers and various object property lists) without having to do a
+copy, as is now done (and similarly, potentially to directly accept
+lists from a Lisp call without copying them for internal use, if the
+Lisp caller is made aware that the list might become read-only) -- if
+the copy weren't done and some piece of Lisp code went and modified the
+list, XEmacs might very well crash.  Thus, this read-only flag would be
+a huge efficiency gain in terms of the garbage collection overhead saved
+as well as the speed of copying a large list.  The extra checks in
+@code{Fsetcar()}, etc. for this that you mention are in fact negligible
+in their speed overhead -- one or two instructions -- and these
+functions are not used all that commonly, either.  With the changes I
+have proposed in Architecting XEmacs, the case of returning an internal
+list will become more and more common, as the power of the user
+interface is greatly increased and, along with it, the number of lists
+of info that need to be retrievable from Lisp.
+@end enumerate
+
+BTW there is a wonderful book all about garbage collection by Jones and
+Lins.  Ever seen it?
+
+@example
+http://www.amazon.com/exec/obidos/ASIN/0471941484/qid=939775572/sr=1-1/002-3092633-2509405
+@end example
+
+@node Discussion -- glyphs,  , Discussion -- garbage collection, Future Work Discussion
+@section Discussion -- glyphs
+@cindex discussion, glyphs
+@cindex glyphs, discussion
+
+Some comments (not always pretty!) by Ben:
+
+@example
+March 20, 2000
+
+Andy, I use the tab widgets but I've been having lots of problems.
+
+1] Sometimes clicking on them does nothing.
+
+2] There's a design flaw: I frequently use M-C-l to switch to the
+previous buffer.  If I use this in conjunction with the tabs, things get
+all screwed up because selecting a buffer with the tab does not bring it
+to the front of the buffer list, like it should.  It looks like you're
+doing this to avoid having the order of the tabs change, but this is
+wrong: If you don't reorder the buffer list, everything else gets
+screwed up.  If you want the order of the tabs not to change, you need
+to decouple this order from the buffer list order.
+@end example
+
+@example
+March 23, 2000
+
+I'm very confused.  The SIGIO timer is used @strong{only} for C-g.  It has
+nothing to do with any other events.  (sit-for 0) ought to
+
+(1) cause all pending non-command events to get executed, and
+(2) do redisplay
+
+However, sit-for gets preempted by input coming in.
+
+What about (sit-for 0.1)?
+
+I suppose a solution along the lines of dispatch-non-command-events
+might be OK if you've tried everything else and it doesn't work, but i'm
+leery of introducing new Lisp functions to deal with specific problems.
+Pretty soon we end up with a whole bevy of such ill-defined functions,
+like we already have.  I think instead, you should introduce the
+following primitive:
+
+(wait-for-event redisplay &rest event-specs)
+
+Waits for one of the event specifications specified to happen.  Returns
+something about what happened.
+
+REDISPLAY controls the behavior of redisplay during waiting.  Something
+like
+
+- nil (never redisplay),
+- t (redisplay when it seems appropriate), etc.
+
+EVENT-SPECS could be
+
+t                     -- drain all non-user events, and then return
+any-process           -- wait till input or state change on any process
+process               -- wait till input or state change on process
+time                  -- wait till such-and-such time has elapsed
+'user                 -- wait till user event has happened
+'(user predicate)     -- wait till user event matching the predicate has
+                         happened
+'event                -- wait till any event has happened
+'(event predicate)    -- wait till event matching the predicate has happened
+
+The existing functions @code{next-event}, @code{next-command-event},
+@code{accept-process-output}, @code{sit-for}, @code{sleep-for}, etc. could all be
+written in terms of this new command.  You could use this command inside
+of your glyph code to ensure that the events that need to be processed
+in order for widget updates to happen actually do get processed.
+
+But you said something about needing a magic event to invoke redisplay?
+Why is that?
+@end example
+
+@example
+April 2, 2000
+
+the internal distinction between "widget" and "layout" is bogus.  there
+exist widgets that do drawing and do layout of their children,
+e.g. group-box widgets and proper tab widgets.  the only sensible
+distinction is between widgets with children and those without children.
+@end example
+
+@example
+April 5, 2000
+
+andy, i'm not sure i really believe that you need to cycle the event
+code to get widgets to redisplay, but in any case you should
+
+@enumerate
+@item
+hide the logic to do this in the c code; the lisp code should do
+nothing other than call (redisplay widget)
+
+@item
+make sure your event-cycling code processes @strong{NO} events at all.  this
+includes non-user events.  queue the events instead.
+@end enumerate
+
+in other words, dispatch-non-command-events must go, and i am proposing
+a general function (redisplay OBJECT) to replace the existing ad-hoc
+functions.
+@end example
+
+@example
+April 6, 2000
+
+the tab widget code should simply be able to create a whole lot of tabs
+without regard to the size of the gutter, and the surrounding layout
+widget (please please make layouts be proper widgets!) should
+automatically map and unmap them as necessary, to fill up the available
+space.  perhaps this already works and what you're doing is just for
+optimization?  but i get the feeling this is not the case.
+@end example
+
+@example
+April 6, 2000
+
+the function make-gutter-only-dialog-frame is bogus.  the use of the
+gutter here to hold widgets is an implementation detail and should not
+be exposed in the interface.  similarly, make-search-dialog should not
+have to do all the futzing that it does.  creating the frame unmapped,
+creating an extent and messing with the gutter: all this stuff should be
+hidden.  you should have a simple function make-dialog-frame that takes
+a dialog specification, and that's all you need to do.
+
+also, these dialog boxes, and this function make-dialog-frame, should
+
+a] be in dialog.el, not gutter-items.el.
+b] when possible, be placed in the interactive spec of standard lisp
+functions rather than accessed directly from menubar-items.el
+c] be wrapped in calls to should-use-dialog-box-p, so the user has control
+over when dialog boxes appear.
+@end example
+
+@example
+April 7, 2000
+
+hmmm ...  in that case, the whitespace absolutely needs to be specified
+as properties of the layout widget (e.g. :border-width and
+:border-height), rather than setting an overall size.  you have no idea
+what the correct size should be if the user changes font size or uses
+translations in a different language.
+
+Your modus operandi should be "hardcoded pixel sizes are @strong{always} bad."
+@end example
+
+@example
+April 7, 2000
+
+you mean the number of tabs adjusts, or the size of each tab adjusts (by
+making the font smaller or something)?  if the size of a single tab is
+not related to the total space the tabs can fit into, then it should be
+possible to simply specify as many tabs as exist for buffers, and have
+the layout manager decide how many can fit into the available space.
+this does @strong{not} mean the layout manager will resize the tabs, because
+query-geometry on the tabs should find out that the tabs don't want to
+be any size other than they are.
+
+the point here is that you should not @strong{have} to worry about pixel
+heights and widths @strong{anywhere} in Lisp-level code.  The layout managers
+should take care of everything for you.  The only exceptions may be in
+some text fields, which will be blank by default and you want to specify
+a maximum width (which should be done in 'n' sizes, not in pixels!).
+
+i won't stop complaining until i see nearly every one of those
+pixel-width and pixel-height parameters gone, and the remaining ones
+there for a very, very good reason.
+@end example
+
+@example
+April 7, 2000
+
+Andy Piper wrote:
+
+> At 03:51 PM 4/6/00 -0700, Ben Wing wrote:
+> >[the function make-gutter-only-dialog-frame is bogus]
+>
+> The problem is that some of the callbacks and such need access to the
+> @strong{created} frame, so you end up in a catch 22 unless you do what I've done.
+
+[Ben proposes other ways to avoid exposing all the guts, as in
+@code{make-gutter-only-dialog-frame}:]
+
+@enumerate
+@item
+Instead of passing in the actual glyph spec or glyph, pass in a
+function of two args (the dialog frame and its parent), which, when
+called, creates and returns the appropriate glyph.
+
+@item
+[Better] Provide a way for callbacks to determine where they were
+invoked from.  This is much more general and is what you should really
+do.  For example, have the code that calls the callbacks bind some
+global variables such as widget-callback-current-glyph and
+widget-callback-current-channel, which contain the glyph whose
+callback is being invoked, and the window or frame of the glyph
+(depending on where the glyph is) where the invocation actually
+happened.  That way, the callbacks can easily figure out the dialog
+box and its parent, and not have to worry about embedding it in at
+creation time.
+@end enumerate
+@end example
+
+@example
+April 15, 2000
+I don't understand when you say "the various types of callback".  Are
+you using the callback for various different purposes?
+
+Your widget callbacks should work just like any other callback: they
+take two arguments, one indicating the object to which the callback was
+attached (an image instance, i think), and the event that caused the
+callback to be invoked.
+@end example
+
+@example
+April 17, 2000
+
+I am completely vetoing widget-callback-current-channel.  How about you
+create a new keyword, :new-callback, that is a function of two args,
+like i specified before.
+
+btw if you really are calling your callback using call-interactively,
+why don't you declare a function (interactive "e") and then call
+event-channel on the resulting event?  that should get you the same
+result as widget-callback-current-channel.
+
+the problem with this and everything you've proposed is that there's no
+way, of course, to get at the actual widget that you were invoked from.
+would you propose adding widget-callback-current-widget?
+@end example
+
+@node Old Future Work, Index, Future Work Discussion, Top
+@chapter Old Future Work
+@cindex old future work
+@cindex future work, old
+
+This chapter includes proposals for future work that were later
+implemented.  These proposals are included because they may describe to
+some extent the actual workings of the implemented code, and because
+they may discuss relevant design issues, alternative implementations, or
+work still to be done.
+
+
+@menu
+* Future Work -- A Portable Unexec Replacement::  
+* Future Work -- Indirect Buffers::  
+* Future Work -- Improvements in support for non-ASCII (European) keysyms under X::  
+* Future Work -- xemacs.org Mailing Address Changes::  
+* Future Work -- Lisp callbacks from critical areas of the C code::  
+@end menu
+
+@node Future Work -- A Portable Unexec Replacement, Future Work -- Indirect Buffers, Old Future Work, Old Future Work
+@section Future Work -- A Portable Unexec Replacement
+@cindex future work, a portable unexec replacement
+@cindex a portable unexec replacement, future work
+
+@strong{Abstract:} Currently, during the build stage of XEmacs, a bare
+version of the program (called @dfn{temacs}) is run, which loads up a
+bunch of Lisp data and then writes out a modified executable file.  This
+process is very tricky to implement and highly system-dependent.  It can
+be replaced by a simple, mostly portable, and easy to implement scheme
+where the Lisp data is written out to a separate data file.
+
+The scheme makes only three assumptions about the memory layout of a
+running XEmacs process, which, as far as I know, are met by all current
+implementations of XEmacs (and they're also requirements of the existing
+unexec scheme):
+
+@enumerate
+@item
+
+The initialized data segments of the various XEmacs modules are all laid
+out contiguously in memory and are separated from the initialized data
+segments of libraries that are linked with XEmacs; likewise for
+uninitialized data segments.
+@item
+
+The beginning and end of the XEmacs portion of the combined initialized
+data segment can be programmatically determined; likewise for the
+uninitialized data segment.
+@item
+
+The XEmacs portion of the initialized and uninitialized data segments
+are always loaded at the same place in memory.
+
+@end enumerate
+
+Assumption number three means that this scheme is non-relocatable, which
+is a disadvantage as compared to other, relocatable schemes that have
+been proposed.  However, the advantage of this scheme over them is that
+it is much easier to implement and requires minimal changes to the
+XEmacs code base.
+
+First, let's go over the theory behind the dumping mechanism.  The
+principles that we would like to follow are:
+
+@enumerate
+@item
+
+We write out to disk all of the data structures and all of their
+sub-structures that we have created ourselves, except for data that is
+expected to change from invocation to invocation (in particular, data
+that is extracted from the external environment at run time).
+@item
+
+We don't write out to disk any data structures created or initialized by
+system libraries, by the kernel or by any other code that we didn't
+create ourselves, because we can't count on that code working in the way
+that we want it to.
+@item
+
+At the beginning of the next invocation of our program, we read in all
+those data structures that we have written out to disk, and then
+continue as if we had just created and initialized all of that data
+ourselves.
+@item
+
+We make sure that our own data structures don't have any pointers to
+system data, or if they do, that we note all of these pointers so that
+we can re-create the system data and set up pointers to the data again
+in the next invocation.
+@item
+
+During the next invocation of our program, we re-create all of our own
+data structures that are derived from the external environment.
+
+@end enumerate
+
+XEmacs, of course, is already set up to adhere to most of these
+principles.
+
+In fact, the current dumping process that we are replacing handles a few
+of these principles slightly differently and adds a few extra of its own:
+
+@enumerate
+@item
+
+All data structures of all sorts, including system data, are written
+out.  This is the cause of no end of problems, and it is avoidable,
+because we can ensure that our own data and the system data are
+physically separated in memory.
+@item
+
+Our own data structures that we derive from the external environment are
+in fact written out and read in, but then are simply overwritten during
+the next invocation with new data.  Before dumping, we make sure to free
+any such data structure that would cause memory leaks.
+@item
+
+XEmacs carefully arranges things so that all static variables in the
+initialized data are never written to after the dumping stage has
+completed.  This allows for an additional optimization in which we can
+make static initialized data segments in pre-dumped invocations of
+XEmacs be read-only and shared among all XEmacs processes on a single
+machine.
+
+@end enumerate
+
+The difficult part in this process is figuring out where our data
+structures lie in memory so that we can correctly write them out and
+read them back in.  The trick that we use to make this problem solvable
+is to ensure that the heap that is used for all dynamically allocated
+data structures that are created during the dumping process is located
+inside the memory of a large, statically declared array.  This ensures
+that all of our own data structures are contained (at least at the time
+that we dump out our data) inside the static initialized and
+uninitialized data segments, which are physically separated in memory
+from any data treated by system libraries and whose starting and ending
+points are known and unchanging (we know that all of these things are
+true because we require them to be so, as preconditions of being able to
+make use of this method of dumping).
+
+In order to implement this method of heap allocation, we change the
+memory allocation function that we use for our own data.  (It's
+extremely important that this function not be used to allocate system
+data.  This means that we must not redefine the @code{malloc} function
+using the linker, but instead we need to achieve this using the C
+preprocessor, or by simply using a different name, such as
+@code{xmalloc}.  It's also very important that we use the correct
+@code{free} function when freeing dynamically-allocated data, depending
+on whether this data was allocated by us or by the
+
+@node Future Work -- Indirect Buffers, Future Work -- Improvements in support for non-ASCII (European) keysyms under X, Future Work -- A Portable Unexec Replacement, Old Future Work
+@section Future Work -- Indirect Buffers
+@cindex future work, indirect buffers
+@cindex indirect buffers, future work
+
+An indirect buffer is a buffer that shares its text with some other
+buffer, but has its own version of all of the buffer properties,
+including markers, extents, buffer local variables, etc.  Indirect
+buffers are not currently implemented in XEmacs, but they are in GNU
+Emacs, and some people have asked for this feature.  I consider this
+feature somewhat extent-related because much of the work required to
+implement this feature involves tracking extents properly.
+
+In a world with indirect buffers, some buffers are direct, and some
+buffers are indirect.  This only matters when there is more than one
+buffer sharing the same text.  In such a case, one of the buffers can be
+considered the canonical buffer for the text in question.  This buffer
+is a direct buffer, and all buffers sharing the text are indirect
+buffers.  These two kinds of buffers are created differently.  One of
+them is created simply using the @code{make_buffer()} function (or
+perhaps the @code{Fget_buffer_create()} function), and the other kind is
+created using the @code{make_indirect_buffer()} function, which takes
+another buffer as an argument which specifies the text of the indirect
+buffer being created.  Every indirect buffer keeps track of the direct
+buffer that is its parent, and every direct buffer keeps a list of all
+of its indirect buffer children.  This list is modified as buffers are
+created and deleted.  Because buffers are permanent objects, there is no
+special garbage collection-related trickery involved in these parent and
+children pointers.  There should never be an indirect buffer whose
+parent is also an indirect buffer.  If the user attempts to set up such
+a situation using @code{make_indirect_buffer()}, either an error should
+be signaled or the parent of the indirect buffer should automatically
+become the direct buffer that actually is responsible for the text.
+Deleting a direct buffer should perhaps cause all of the indirect buffer
+children to be deleted automatically.  There should be Lisp functions
+for determining whether a buffer is direct or indirect, and other
+functions for retrieving the parents, or the children of the buffer,
+depending on which is appropriate.  (The scheme being described here is
+similar to symbolic links.  Another possible scheme would be analogous
+to hard links, and would make no distinction between direct and indirect
+buffers.  In that case, the text of the buffer logically exists as an
+object separate from the buffer itself and only goes away when the last
+buffer pointing to this text is deleted.)
+
+Other than keeping track of parent and child pointers, the only
+remaining thing required to implement indirect buffers is to ensure that
+changes to the text of the buffer trigger the same sorts of effects in
+all the buffers that share that text.  Luckily there are only three
+functions in XEmacs that actually make changes to the text of the
+buffer, and they are all located in the file @file{insdel.c}.
+
+These three functions are called @code{buffer_insert_string_1()},
+@code{buffer_delete_range()}, and @code{buffer_replace_char()}.  All of
+the subfunctions called by these functions are also in @file{insdel.c}.
+
+The first thing that each of these three functions needs to do is check
+to see if its buffer argument is an indirect buffer, and if so, convert
+it to the indirect buffer's parent.  Once that is done, the functions
+need to be modified so that all of the things they do other than
+actually changing the buffer's text, such as calling
+before-change-functions and after-change-functions and updating extents
+and markers, are done for all of the buffers that are indirect
+children of the buffer being modified, as well as, of course, for the
+buffer itself.  Each step in the process needs to be iterated for all of
+the buffers in question before proceeding to the next step.  For
+example, in @code{buffer_insert_string_1()},
+@code{prepare_to_modify_buffer()} needs to be called in turn, for all of
+the buffers sharing the text being modified.  Then the text itself is
+modified, then @code{insert_invalidate_line_number_cache()} is called
+for all of the buffers, then @code{record_insert()} is called for all of
+the buffers, etc.  Essentially, the operation is being done on all of
+the buffers in parallel, rather than each buffer being processed in
+series.  This is necessary because many of the steps can quit or call
+Lisp code and each step depends on the previous step, and some steps are
+done only once, rather than on each buffer.  I imagine it would be
+significantly easier to implement this if a macro were created for
+iterating over a buffer and then all of the indirect children of that
+buffer, along the lines of the sketch below.
+
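+Here is a minimal sketch of what such an iteration macro might look
+like.  The macro name, the @code{indirect_parent},
+@code{indirect_children} and @code{next_sibling} fields, and the list
+representation are all invented for this illustration; the real
+@code{struct buffer} would need corresponding fields added to it.
+
+@example
+#include <stdio.h>
+
+struct buffer
+{
+  const char *name;
+  struct buffer *indirect_parent;     /* NULL for a direct buffer */
+  struct buffer *indirect_children;   /* first indirect child, or NULL */
+  struct buffer *next_sibling;        /* next child of the same parent */
+};
+
+/* Canonicalize to the direct buffer, then visit the direct buffer and
+   each of its indirect children in turn. */
+#define MAP_INDIRECT_BUFFERS(buf, var)                          \
+  for ((var) = ((buf)->indirect_parent                          \
+                ? (buf)->indirect_parent : (buf));              \
+       (var) != NULL;                                           \
+       (var) = ((var)->indirect_parent == NULL                  \
+                ? (var)->indirect_children                      \
+                : (var)->next_sibling))
+
+int
+main (void)
+{
+  struct buffer direct = { "direct", NULL, NULL, NULL };
+  struct buffer ind2   = { "indirect-2", &direct, NULL, NULL };
+  struct buffer ind1   = { "indirect-1", &direct, NULL, &ind2 };
+  struct buffer *b;
+
+  direct.indirect_children = &ind1;
+
+  /* Roughly how buffer_insert_string_1() might arrange to call
+     prepare_to_modify_buffer() for every buffer sharing the text: */
+  MAP_INDIRECT_BUFFERS (&ind2, b)
+    printf ("would call prepare_to_modify_buffer (%s)\n", b->name);
+  return 0;
+}
+@end example
+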
+@node Future Work -- Improvements in support for non-ASCII (European) keysyms under X, Future Work -- xemacs.org Mailing Address Changes, Future Work -- Indirect Buffers, Old Future Work
+@section Future Work -- Improvements in support for non-ASCII (European) keysyms under X
+@cindex future work, improvements in support for non-ascii (european) keysyms under x
+@cindex improvements in support for non-ascii (european) keysyms under x, future work
+
+From Martin Buchholz.
+
+If a user has a keyboard with known standard non-ASCII character
+equivalents, typically for European users, then Emacs' default
+binding should be @code{self-insert-command}, with the obvious character
+inserted.  For example, if a user has a keyboard with
+
+@example
+xmodmap -e "keycode 54 = scaron"
+@end example
+
+then pressing that key on the keyboard will insert the (Latin-2)
+character corresponding to @code{scaron} into the buffer.
+
+Note: Emacs 20.6 does @strong{nothing} when pressing such a key (not even
+an error); i.e. even @code{(read-event)} ignores this key, which means it
+can't even be bound to anything by a user trying to customize it.
+
+This is implemented by maintaining a table of translations between all
+the known X keysym names and the corresponding (charset, octet) pairs.
+
+For every key on the keyboard that has a known character correspondence,
+we define the @code{ascii-character} property of the keysym, and make the
+default binding for the key be @code{self-insert-command}.
+
+The following magic is basically intimate knowledge of
+@file{X11/keysymdef.h}.  The keysym mappings defined by X11 are based on
+the iso8859 standards, except for Cyrillic and Greek.
+
+In a non-Mule world, a user can still have a multi-lingual editor, by doing
+
+@example
+(set-face-font "...-iso8859-2" (current-buffer))
+@end example
+
+@noindent
+for all their Latin-2 buffers, etc.
+
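+Below is a minimal sketch of the kind of translation table this implies.
+The table contents shown, the @code{charset_id} values and the lookup
+function are assumptions made for this illustration, not the actual data
+structures used in the event-handling code.
+
+@example
+#include <string.h>
+#include <stdio.h>
+
+/* Hypothetical charset identifiers for this sketch. */
+enum charset_id { CHARSET_LATIN_1, CHARSET_LATIN_2 };
+
+struct keysym_translation
+{
+  const char *keysym;       /* X keysym name, as in X11/keysymdef.h */
+  enum charset_id charset;  /* character set the key belongs to */
+  int octet;                /* position of the character in that set */
+};
+
+/* A few sample entries; the real table would cover every keysym with a
+   known character correspondence. */
+static const struct keysym_translation translations[] =
+{
+  { "aacute", CHARSET_LATIN_1, 0xE1 },
+  { "scaron", CHARSET_LATIN_2, 0xB9 },
+  { "tcaron", CHARSET_LATIN_2, 0xBB },
+};
+
+/* Look up a keysym name; returns NULL if we know nothing about it. */
+static const struct keysym_translation *
+lookup_keysym (const char *name)
+{
+  size_t i;
+  for (i = 0; i < sizeof translations / sizeof translations[0]; i++)
+    if (strcmp (translations[i].keysym, name) == 0)
+      return &translations[i];
+  return NULL;
+}
+
+int
+main (void)
+{
+  const struct keysym_translation *t = lookup_keysym ("scaron");
+  if (t != NULL)
+    printf ("scaron -> charset %d, octet 0x%02X\n", t->charset, t->octet);
+  return 0;
+}
+@end example
+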
+@node Future Work -- xemacs.org Mailing Address Changes, Future Work -- Lisp callbacks from critical areas of the C code, Future Work -- Improvements in support for non-ASCII (European) keysyms under X, Old Future Work
+@section Future Work -- xemacs.org Mailing Address Changes
+@cindex future work, xemacs.org mailing address changes
+@cindex xemacs.org mailing address changes, future work
+
+@subheading Personal addresses
+
+@enumerate
+@item
+
+Everyone who is contributing or has ever contributed code to the XEmacs
+core, or to any of the packages archived at xemacs.org, even if they
+don't actually have an account on any machine at xemacs.org. In fact,
+all of these people should have two mailing addresses at xemacs.org, one
+of which is their actual login name (or potential login name if they
+were ever to have an account), and the other one is in the form of first
+name/last name, similar to the way things are done at Sun.  For example,
+Martin would have two addresses at xemacs.org, @code{martin@@xemacs.org},
+and @code{martin.buchholz@@xemacs.org}, with the latter one simply being
+an alias for the former.  The idea is that in all cases, if you simply
+know the name of any past or present contributor to XEmacs, and you want
+to mail them, you will know immediately how to do this without having to
+do any complicated searching on the Web or in XEmacs documentation.
+@item
+
+Furthermore, I think that all of the email addresses mentioned anywhere
+in the XEmacs source code or documentation should be changed to be the
+corresponding ones at xemacs.org, instead of any other email addresses
+that any contributors might have.
+@item
+
+All the places in the source code where a contributor's name is
+mentioned, but no email address is attached, should be found, and the
+correct xemacs.org address should be attached.
+@item
+
+The alias file mapping people's addresses at xemacs.org to their actual
+addresses elsewhere (in the case, as will be true for the majority of
+addresses, where the contributor does not actually have an account at
+xemacs.org, but simply a forwarding pointer), should be viewable on the
+xemacs.org web site through a CGI script that reads the alias file and
+turns it into an HTML table.
+
+@end enumerate
+
+@subheading Package addresses
+
+I also think that for every package archived at xemacs.org, there should
+be three corresponding email addresses at xemacs.org.  For example,
+consider a package such as @code{lazy-shot}.  The addresses associated
+with this package would be:
+
+@table @code
+@item lazy-shot@@xemacs.org
+This is a discussion mailing list about the @code{lazy-shot} package,
+and it should be controlled by Majordomo in the standard fashion.
+@item lazy-shot-patches@@xemacs.org
+This is where patches to the @code{lazy-shot} package are sent.  This
+should go to various people who are interested in such patches.  For
+example, the maintainer of @code{lazy-shot}, perhaps the maintainer of
+XEmacs itself, and probably to other people who have volunteered to do
+code review for this package, or for a larger group of packages that
+this package is in.  Perhaps this list should also be maintained by
+Majordomo.
+@item lazy-shot-maintainer@@xemacs.org
+This address is for mailing the maintainer directly.  It is possible
+that this will go to more than one person.  This would particularly be
+the case, for example, if the maintainer is dormant or does not appear
+very responsive to patches.  In this case, the address would also point
+to someone like Steve, who is acting in the maintainer's stead, and who
+will himself apply patches or make other changes to the package as
+maintained in the CVS archive on xemacs.org.
+@end table
+
+It may take a bit of work to track down the current addresses for the
+various package maintainers, and may in general seem like a lot of work
+to set up all of these mail addresses, but I think it's very important
+to make it as easy as possible for random XEmacs users to be able to
+submit patches and report bugs in an orderly fashion.  The general idea
+that I'm striving for is to create as much momentum as possible in the
+XEmacs development community, and I think having the system of mail
+addresses set up will make it much easier for this momentum to be built
+up and to remain.
+
+@uref{../../www.666.com/ben/default.htm,Ben Wing}
+
+@node Future Work -- Lisp callbacks from critical areas of the C code,  , Future Work -- xemacs.org Mailing Address Changes, Old Future Work
+@section Future Work -- Lisp callbacks from critical areas of the C code
+@cindex future work, lisp callbacks from critical areas of the c code
+@cindex lisp callbacks from critical areas of the c code, future work
+
+@example
+There are many places in the XEmacs C code where Lisp functions are
+called, usually because the Lisp function is acting as a callback,
+hook, process filter, or the like.  The Lisp code is often called in
+places where some Lisp operations are dangerous.  Currently there are
+a lot of ad-hoc schemes implemented to try to prevent these dangerous
+operations from causing problems.  I've added a lot of them myself,
+for example, the @code{call*_trapping_errors()} functions.  Other places,
+such as the pre-gc- and post-gc-hooks, do their own ad hoc processing.
+I'm proposing a scheme that would generalize all of this ad hoc code
+and allow Lisp code to be called in all sorts of sensitive areas of
+the C code, including even within redisplay.
+
+Basically, we define a set of operations that are disallowable because
+they are dangerous.  We essentially assign a bit flag to all of these
+operations.  Whenever any sensitive C code wants to call Lisp code,
+instead of using the standard call* functions, it uses a new set of
+functions, call*_critical, which takes an extra parameter, which is a
+bit mask specifying the set of operations which are disallowed.  The
+basic operation of these functions is simply to set a global variable
+corresponding to the bit mask (more specifically, the functions store
+the previous value of this global variable in an unwind_protect, and
+use bitwise-or to combine the previous value with the new bit mask
+that was passed in).  (Actually, we should first implement a slightly
+lower level function which is called @code{enter_sensitive_code_section()},
+which simply sets up the global variable and the @code{unwind_protect()}, and
+returns a @code{specbind()} value, but doesn't actually call any Lisp code.
+There is a corresponding function @code{exit_sensitive_code_section()}, which
+takes the specbind value as an argument, and unwinds the
+unwind_protect.  The call*_critical functions are trivially
+implemented in terms of these lower level functions.)
+
+Corresponding to each of these entries is the C name of the bit flag.
+
+The sets of dangerous operations which can be prohibited are:
+
+OPERATION_GC_PROHIBITED
+1. garbage collection.  When this flag is set, and the garbage
+   collection threshold is reached, garbage collection simply doesn't
+   happen.  It will happen at the next opportunity that it is allowed.
+   Similarly, explicitly calling the Lisp function garbage-collect
+   simply does nothing.
+
+OPERATION_CATCH_ERRORS
+2. signalling an error.  When the bit flag corresponding to this
+   prohibited operation is passed to
+   @code{enter_sensitive_code_section()}, a catch is set up which catches all
+   errors, signals a warning with @code{warn_when_safe()}, and then simply
+   continues.  This is exactly the same behavior you now get with the
+   @code{call_*_trapping_errors()} functions.  (There should also be some way
+   of specifying a warning level and class here, similar to the
+   @code{call_*_trapping_errors()} functions.  This is not completely
+   important, however, because a standard warning level and class
+   could simply be chosen.)
+
+OPERATION_NO_UNSAFE_OBJECT_DELETION
+3. This flag prohibits deletion of any permanent object (i.e. any
+   object that does not automatically disappear when created, such as
+   buffers, frames, devices, windows, etc...) unless they were created
+   after this bit flag was set.  This would be implemented using a
+   list which stores all of the permanent objects created after this
+   bit flag was set.  This list is reset to its previous value when
+   the call to @code{exit_sensitive_code_section()} occurs.  The motivation
+   here is to allow Lisp callbacks to create their own temporary
+   buffers or frames, and later delete them, but not allow any other
+   permanent objects to be deleted, because C code might be working
+   with them, and not expect them to change.
+
+OPERATION_NO_BUFFER_MODIFICATION
+4. This flag disallows modifications to the text, extent or any other
+   properties of any buffers except those created after this flag was
+   set, just like in the previous entry.
+
+OPERATION_NO_REDISPLAY
+5. This bit flag inhibits any redisplay-related operations from
+   happening, more specifically, any entry into the redisplay-related
+   code.  This includes, for example, the Lisp functions sit-for,
+   force-redisplay, force-cursor-redisplay, window-end with certain
+   arguments to it, and various other functions. When this flag is
+   set, instead of entering the redisplay code, the calling function
+   should simply make sure not to enter the redisplay code, (for
+   example, in the case of window-end), or postpone the redisplay
+   until such a time when it's safe (for example, with sit-for and
+   force-redisplay).
+
+OPERATION_NO_REDISPLAY_SETTINGS_CHANGE
+6. This flag prohibits any modifications to faces, glyphs, specifiers,
+   extents, or any other settings that will affect the way that any
+   window is displayed.
+
+
+The idea here is that it will finally be safe to call Lisp code from
+nearly any part of the C code, simply by setting any combination of
+restricted operation bit flags.  This even includes calls from within
+redisplay.  (In such a case, all of the bit flags need to be set.)  The
+reason that I thought of this is that some coding system translations
+might cause Lisp code to be invoked and C code often invokes these
+translations in sensitive places.
+@end example
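+
+Below is a minimal, self-contained sketch of the scheme described above.
+It reuses the proposal's own names (the @code{OPERATION_*} flags,
+@code{enter_sensitive_code_section()},
+@code{exit_sensitive_code_section()}), but the types and the way the
+saved state is handed back are assumptions made for this illustration;
+the real implementation would push the old mask onto an
+@code{unwind_protect} rather than returning it directly.
+
+@example
+#include <stdio.h>
+
+/* Bit flags for the prohibited operations described above. */
+#define OPERATION_GC_PROHIBITED                (1 << 0)
+#define OPERATION_CATCH_ERRORS                 (1 << 1)
+#define OPERATION_NO_UNSAFE_OBJECT_DELETION    (1 << 2)
+#define OPERATION_NO_BUFFER_MODIFICATION       (1 << 3)
+#define OPERATION_NO_REDISPLAY                 (1 << 4)
+#define OPERATION_NO_REDISPLAY_SETTINGS_CHANGE (1 << 5)
+
+/* The global mask of currently prohibited operations. */
+static unsigned int prohibited_operations;
+
+/* In the real thing this would record an unwind_protect and return a
+   specbind count; here we just hand back the previous mask. */
+static unsigned int
+enter_sensitive_code_section (unsigned int flags)
+{
+  unsigned int previous = prohibited_operations;
+  prohibited_operations |= flags;
+  return previous;
+}
+
+static void
+exit_sensitive_code_section (unsigned int previous)
+{
+  prohibited_operations = previous;
+}
+
+/* The kind of check that would be sprinkled through the sensitive
+   subsystems. */
+static void
+maybe_garbage_collect (void)
+{
+  if (prohibited_operations & OPERATION_GC_PROHIBITED)
+    {
+      printf ("GC requested but prohibited; deferring\n");
+      return;
+    }
+  printf ("garbage collecting\n");
+}
+
+int
+main (void)
+{
+  unsigned int prev =
+    enter_sensitive_code_section (OPERATION_GC_PROHIBITED
+                                  | OPERATION_NO_REDISPLAY);
+  maybe_garbage_collect ();              /* deferred */
+  exit_sensitive_code_section (prev);
+  maybe_garbage_collect ();              /* runs normally */
+  return 0;
+}
+@end example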
+
+@c Indexing guidelines
+
+@c I assume that all indexes will be combined.
+@c Therefore, if a generated findex and permutations
+@c cover the ways an index user would look up the entry,
+@c then no cindex is added.
+@c Concept index (cindex) entries will also be permuted.  Therefore, they
+@c have no commas and few irrelevant connectives in them.
+
+@c I tried to include words in a cindex that give the context of the entry,
+@c particularly if there is more than one entry for the same concept.
+@c For example, "nil in keymap"
+@c Similarly for explicit findex and vindex entries, e.g. "print example".
+
+@c Error codes are given cindex entries, e.g. "end-of-file error".
+
+@c pindex is used for .el files and Unix programs
+
+@node Index,  , Old Future Work, Top
+@unnumbered Index
+
+@ignore
+All variables, functions, keys, programs, files, and concepts are
+in this one index.  
+
+All names and concepts are permuted, so they appear several times, one
+for each permutation of the parts of the name.  For example,
+@code{function-name} would appear as @b{function-name} and @b{name,
+function-}.  Key entries are not permuted, however.
+@end ignore
+
+@c Print the indices
+
+@printindex fn
 
 @c Print the tables of contents
 @summarycontents