\input texinfo @c -*-texinfo-*-
@c %**start of header
@setfilename ../../info/internals.info
@settitle XEmacs Internals Manual
@c %**end of header

@ifinfo
@dircategory XEmacs Editor
@direntry
* Internals: (internals).       XEmacs Internals Manual.
@end direntry

Edition History:

Created November 1995 (?) by Ben Wing.
XEmacs Internals Manual Version 1.0, March, 1996.
XEmacs Internals Manual Version 1.1, March, 1997.
XEmacs Internals Manual Version 1.4, March, 2001.
XEmacs Internals Manual Version 21.5, October, 2004.

@c Please REMEMBER to update edition number in *four* places in this file,
@c including adding a line above.

Copyright @copyright{} 1992 - 2004 Ben Wing.
Copyright @copyright{} 1996, 1997 Sun Microsystems.
Copyright @copyright{} 1994 - 1998, 2002, 2003 Free Software Foundation.
Copyright @copyright{} 1994, 1995 Board of Trustees, University of Illinois.

Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.

@ignore
Permission is granted to process this file through TeX and print the results, provided the printed document carries copying permission notice identical to this one except for the removal of this paragraph (this paragraph not being relevant to the printed manual).
@end ignore

Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the Foundation.

Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the section entitled ``GNU General Public License'' is included exactly as in the original, and provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that the section entitled ``GNU General Public License'' may be included in a translation approved by the Free Software Foundation instead of in the original English.
@end ifinfo

@c Combine indices.
@synindex cp fn
@syncodeindex vr fn
@syncodeindex ky fn
@syncodeindex pg fn
@syncodeindex tp fn

@setchapternewpage odd
@finalout

@titlepage
@title XEmacs Internals Manual
@subtitle Version 21.5, October 2004

@author Ben Wing
@sp 1
Improvements by
@sp 1
@author Stephen Turnbull
@author Martin Buchholz
@author Hrvoje Niksic
@author Matthias Neubauer
@author Olivier Galibert
@author Andy Piper
@page
@vskip 0pt plus 1fill

@noindent
Copyright @copyright{} 1992 - 2004 Ben Wing. @*
Copyright @copyright{} 1996, 1997 Sun Microsystems. @*
Copyright @copyright{} 1994 - 1998, 2002, 2003 Free Software Foundation. @*
Copyright @copyright{} 1994, 1995 Board of Trustees, University of Illinois.
@sp 2
Version 21.5 @*
October 2004.@*

Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.
Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the section entitled ``GNU General Public License'' is included exactly as in the original, and provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that the section entitled ``GNU General Public License'' may be included in a translation approved by the Free Software Foundation instead of in the original English.
@end titlepage
@page

@node Top, Introduction, (dir), (dir)

@ifinfo
This Info file contains v21.5 of the XEmacs Internals Manual, October 2004.
@end ifinfo

@ignore
Don't update this by hand!!!!!! Use C-u C-c C-u m (aka C-u M-x texinfo-master-list).

NOTE: This command does not include the Index:: menu entry. You must add it by hand.

Here are some useful Lisp routines for quickly Texinfo-izing text that has been formatted into ASCII lists and tables.

(defun list-to-texinfo (b e)
  "Convert the selected region from an ASCII list to a Texinfo list."
  (interactive "r")
  (save-restriction
    (narrow-to-region b e)
    (goto-char (point-min))
    (let ((dash-type "^ *-+ +")
          ;; allow single-letter numbering or roman numerals
          (letter-type "^ *[[(]?\\([a-zA-Z]\\|[IVXivx]+\\)[]).] +")
          (num-type "^ *[[(]?[0-9]+[]).] +")
          dash regexp)
      (save-excursion
        (re-search-forward "\\s-*")
        (cond ((looking-at dash-type) (setq regexp dash-type dash t))
              ((looking-at letter-type) (setq regexp letter-type))
              ((looking-at num-type) (setq regexp num-type))
              ((re-search-forward num-type nil t) (setq regexp num-type))
              ((re-search-forward letter-type nil t) (setq regexp letter-type))
              ((re-search-forward dash-type nil t) (setq regexp dash-type dash t))
              (t (error "No table entries?"))))
      (if dash
          (insert "@itemize @bullet\n")
        (insert "@enumerate\n"))
      (re-search-forward regexp nil 'limit)
      (while (not (eobp))
        (delete-region (point-at-bol) (point))
        (insert "@item\n")
        ;; move forward over any text following the dash to not screw
        ;; up remove-spacing.
        (forward-line 1)
        (let ((p (point)))
          (or (re-search-forward regexp nil t)
              (goto-char (point-max)))
          ;; trick to avoid using a marker
          (save-excursion
            ;; back up so as not to affect the line we're on (beginning of
            ;; next entry)
            (forward-line -1)
            (remove-spacing p (point)))))
      (beginning-of-line)
      (if dash
          (insert "@end itemize\n")
        (insert "@end enumerate\n")))))
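For example, running list-to-texinfo with the region covering this ASCII list (a minimal illustration of the dash case):

  - first item
  - second item

should rewrite it in place roughly as:

@itemize @bullet
@item
first item
@item
second item
@end itemize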
(defun remove-spacing (b e)
  "Remove leading space from the selected region.
This finds the maximum leading blank area common to all lines in the
region.  This includes all lines any part of which are in the region."
  (interactive "r")
  (save-excursion
    (let ((min 999999) seen)
      (goto-char e)
      (end-of-line)
      (setq e (point))
      (goto-char b)
      (beginning-of-line)
      (setq b (point))
      (while (< (point) e)
        (cond ((looking-at "^\\s-+")
               (goto-char (match-end 0))
               (setq min (min min (current-column)) seen t))
              ((looking-at "^\\s-*$"))
              (t (setq min 0)))
        (forward-line 1))
      (when (and seen (> min 0))
        (goto-char e)
        (untabify b e)
        ;; we are at end of line already.
        (if (not (= (point) (point-at-eol)))
            (error "Logic error"))
        ;; Pad line with spaces if necessary (it may be just a blank line)
        (if (< (current-column) min)
            (insert-char ?\  (- min (current-column)))
          (beginning-of-line)
          (forward-char min))
        (kill-rectangle b (point))))))

(defun table-to-texinfo (b e)
  "Convert the selected region from an ASCII table to a Texinfo table.
Assumes entries are separated by a blank line, and the first sexp in
each entry is the table heading."
  (interactive "r")
  (save-restriction
    (narrow-to-region b e)
    (goto-char (point-min))
    (insert "@table @code\n")
    (while (not (eobp))
      ;; remember where we want to insert the @item.
      ;; delete the spacing first since inserting the @item may create
      ;; a line with no spacing, if there is text following the heading on
      ;; the same line.
      (let ((beg (point)))
        ;; removing the space and inserting the @item will change the
        ;; position of the end of the region, so to make it easy on us
        ;; leave point at end so it will be adjusted.
        (forward-line 1)
        (let ((beg2 (point)))
          (or (re-search-forward "^$" nil t)
              (goto-char (point-max)))
          (backward-char 1)
          (remove-spacing beg2 (point)))
        (ignore-errors (forward-char 2))
        (save-excursion
          (goto-char beg)
          (insert "@item ")
          (forward-sexp)
          (delete-char)
          (insert "\n"))))
    (beginning-of-line)
    (insert "@end table\n")))

A useful Lisp routine for adding markup based on conventions used in plain text files; see doc string below.

(defun convert-text-to-texinfo (&optional no-narrow)
  "Convert text to Texinfo.
If the region is active, do the region; otherwise, go from point to the
end of the buffer.  This query-replaces for various kinds of conventions
used in text: @code{} surrounded by ` and ' or followed by a ();
@strong{} surrounded by *'s; @file{} something that looks like a file name."
  (interactive)
  (if (and (region-active-p) (not no-narrow))
      (save-restriction
        (narrow-to-region (region-beginning) (region-end))
        ;; narrow to the active region, then recurse to do the real work
        (convert-text-to-texinfo t))
    (let ((p (point))
          (case-replace nil))
      (query-replace-regexp "`\\([^']+\\)'\\([^']\\)" "@code{\\1}\\2" nil)
      (goto-char p)
      (query-replace-regexp "\\(\\Sw\\)\\*\\(\\(?:\\s_\\|\\sw\\)+\\)\\*\\([^A-Za-z.}]\\)" "\\1@strong{\\2}\\3" nil)
      (goto-char p)
      (query-replace-regexp "\\(\\(\\s_\\|\\sw\\)+()\\)\\([^}]\\)" "@code{\\1}\\3" nil)
      (goto-char p)
      (query-replace-regexp "\\(\\(\\s_\\|\\sw\\)+\\.[A-Za-z]+\\)\\([^A-Za-z.}]\\)" "@file{\\1}\\3" nil))))

Macro to generate the "Future Work" section from a title; put point at beginning.

(defalias 'make-future (read-kbd-macro
"<S-end> <f3> <home> @node SPC <end> RET @section SPC <f4> <home> <up> <C-right> <right> Future SPC Work SPC - - SPC <home> <down> <C-right> <right> Future SPC Work SPC - - SPC <end> RET @cindex SPC future SPC work, SPC <f4> C-r , RET C-x C-x M-l RET @cindex SPC <f4> <home> <C-right> <S-end> M-l , SPC future SPC work RET"))

Similar but generates a "Discussion" section.

(defalias 'make-discussion (read-kbd-macro
"<S-end> <f3> <home> @node SPC <end> RET @section SPC <f4> <home> <up> <C-right> <right> Discussion SPC - - SPC <home> <down> <C-right> <right> Discussion SPC - - SPC <end> RET @cindex SPC discussion, SPC <f4> C-r , RET C-x C-x M-l RET @cindex SPC <f4> <home> <C-right> <S-end> M-l , SPC discussion RET"))

Similar but generates an "Old Future Work" section.
(defalias 'make-old-future (read-kbd-macro
"<S-end> <f3> <home> @node SPC <end> RET @section SPC <f4> <home> <up> <C-right> <right> Old SPC Future SPC Work SPC - - SPC <home> <down> <C-right> <right> Old SPC Future SPC Work SPC - - SPC <end> RET @cindex SPC old SPC future SPC work, SPC <f4> C-r , RET C-x C-x M-l RET @cindex SPC <f4> <home> <C-right> <S-end> M-l , SPC old SPC future SPC work RET"))

Similar but generates a general section.

(defalias 'make-section (read-kbd-macro
"<S-end> <f3> <home> @node SPC <end> RET @section SPC <f4> RET @cindex SPC C-SPC C-g <f4> C-x C-x M-l <home> <down>"))

Similar but generates a general subsection.

(defalias 'make-subsection (read-kbd-macro
"<S-end> <f3> <home> @node SPC <end> RET @subsection SPC <f4> RET @cindex SPC C-SPC C-g <f4> C-x C-x M-l <home> <down>"))
@end ignore

@menu
* Introduction::                Overview of this manual.
* Authorship of XEmacs::
* A History of Emacs::          Times, dates, important events.
* The XEmacs Split::
* XEmacs from the Outside::     A broad conceptual overview.
* The Lisp Language::           An overview.
* XEmacs from the Perspective of Building::
* Build-Time Dependencies::
* The Modules of XEmacs::
* Rules When Writing New C Code::
* Regression Testing XEmacs::
* CVS Techniques::
* XEmacs from the Inside::
* Basic Types::
* Low-Level Allocation::
* The XEmacs Object System (Abstractly Speaking)::
* How Lisp Objects Are Represented in C::
* Allocation of Objects in XEmacs Lisp::
* The Lisp Reader and Compiler::
* Evaluation; Stack Frames; Bindings::
* Symbols and Variables::
* Buffers::
* Text::
* Multilingual Support::
* Consoles; Devices; Frames; Windows::
* The Redisplay Mechanism::
* Extents::
* Faces::
* Glyphs::
* Specifiers::
* Menus::
* Events and the Event Loop::
* Asynchronous Events; Quit Checking::
* Lstreams::
* Subprocesses::
* Interface to MS Windows::
* Interface to the X Window System::
* Dumping::
* Future Work::
* Future Work Discussion::
* Old Future Work::
* Index::

@detailmenu
 --- The Detailed Node Listing ---

A History of Emacs

* Through Version 18::          Unification prevails.
* Epoch::                       An early graphical split of GNU Emacs.
* Lucid Emacs::                 One version 19 Emacs.
* GNU Emacs 19::                The other version 19 Emacs.
* GNU Emacs 20::                The other version 20 Emacs.
* XEmacs::                      The continuation of Lucid Emacs.
The Modules of XEmacs

* A Summary of the Various XEmacs Modules::
* Low-Level Modules::
* Basic Lisp Modules::
* Modules for Standard Editing Operations::
* Modules for Interfacing with the File System::
* Modules for Other Aspects of the Lisp Interpreter and Object System::
* Modules for Interfacing with the Operating System::

Rules When Writing New C Code

* Introduction to Writing C Code::
* Writing New Modules::
* Working with Lisp Objects::
* Writing Lisp Primitives::
* Writing Good Comments::
* Adding Global Lisp Variables::
* Writing Macros::
* Proper Use of Unsigned Types::
* Major Textual Changes::
* Debugging and Testing::

Major Textual Changes

* Great Integral Type Renaming::
* Text/Char Type Renaming::

Regression Testing XEmacs

* How to Regression-Test::
* Modules for Regression Testing::

CVS Techniques

* Merging a Branch into the Trunk::

Low-Level Allocation

* Basic Heap Allocation::
* Stack Allocation::
* Dynamic Arrays::
* Allocation by Blocks::
* Modules for Allocation::

Allocation of Objects in XEmacs Lisp

* Introduction to Allocation::
* Garbage Collection::
* GCPROing::
* Garbage Collection - Step by Step::
* Integers and Characters::
* Allocation from Frob Blocks::
* lrecords::
* Low-level allocation::
* Cons::
* Vector::
* Bit Vector::
* Symbol::
* Marker::
* String::
* Compiled Function::

Garbage Collection - Step by Step

* Invocation::
* garbage_collect_1::
* mark_object::
* gc_sweep::
* sweep_lcrecords_1::
* compact_string_chars::
* sweep_strings::
* sweep_bit_vectors_1::

Evaluation; Stack Frames; Bindings

* Evaluation::
* Dynamic Binding; The specbinding Stack; Unwind-Protects::
* Simple Special Forms::
* Catch and Throw::
* Error Trapping::

Symbols and Variables

* Introduction to Symbols::
* Obarrays::
* Symbol Values::

Buffers

* Introduction to Buffers::     A buffer holds a block of text such as a file.
* Buffer Lists::                Keeping track of all buffers.
* Markers and Extents::         Tagging locations within a buffer.
* The Buffer Object::           The Lisp object corresponding to a buffer.

Text

* The Text in a Buffer::        Representation of the text in a buffer.
* Ibytes and Ichars::           Representation of individual characters.
* Byte-Char Position Conversion::
* Searching and Matching::      Higher-level algorithms.

Multilingual Support

* Introduction to Multilingual Issues #1::
* Introduction to Multilingual Issues #2::
* Introduction to Multilingual Issues #3::
* Introduction to Multilingual Issues #4::
* Character Sets::
* Encodings::
* Internal Mule Encodings::
* Byte/Character Types; Buffer Positions; Other Typedefs::
* Internal Text API's::
* Coding for Mule::
* CCL::
* Microsoft Windows-Related Multilingual Issues::
* Modules for Internationalization::

Encodings

* Japanese EUC (Extended Unix Code)::
* JIS7::

Internal Mule Encodings

* Internal String Encoding::
* Internal Character Encoding::

Byte/Character Types; Buffer Positions; Other Typedefs

* Byte Types::
* Different Ways of Seeing Internal Text::
* Buffer Positions::
* Other Typedefs::
* Usage of the Various Representations::
* Working With the Various Representations::

Internal Text API's

* Basic internal-format API's::
* The DFC API::
* The Eistring API::

Coding for Mule

* Character-Related Data Types::
* Working With Character and Byte Positions::
* Conversion to and from External Data::
* General Guidelines for Writing Mule-Aware Code::
* An Example of Mule-Aware Code::
* Mule-izing Code::

Microsoft Windows-Related Multilingual Issues

* Microsoft Documentation::
* Locales::
* More about code pages::
* More about locales::
* Unicode support under Windows::
* The golden rules of writing Unicode-safe code::
* The format of the locale in setlocale()::
* Random other Windows I18N docs::

Consoles; Devices; Frames; Windows

* Introduction to Consoles; Devices; Frames; Windows::
* Point::
* Window Hierarchy::
* The Window Object::
* Modules for the Basic Displayable Lisp Objects::

The Redisplay Mechanism

* Critical Redisplay Sections::
* Line Start Cache::
* Redisplay Piece by Piece::
* Modules for the Redisplay Mechanism::
* Modules for other Display-Related Lisp Objects::

Extents

* Introduction to Extents::     Extents are ranges over text, with properties.
* Extent Ordering::             How extents are ordered internally.
* Format of the Extent Info::   The extent information in a buffer or string.
* Zero-Length Extents::         A weird special case.
* Mathematics of Extent Ordering::  A rigorous foundation.
* Extent Fragments::            Cached information useful for redisplay.

Events and the Event Loop

* Introduction to Events::
* Main Loop::
* Specifics of the Event Gathering Mechanism::
* Specifics About the Emacs Event::
* Event Queues::
* Event Stream Callback Routines::
* Other Event Loop Functions::
* Stream Pairs::
* Converting Events::
* Dispatching Events; The Command Builder::
* Focus Handling::
* Editor-Level Control Flow Modules::

Asynchronous Events; Quit Checking

* Signal Handling::
* Control-G (Quit) Checking::
* Profiling::
* Asynchronous Timeouts::
* Exiting::

Lstreams

* Creating an Lstream::         Creating an lstream object.
* Lstream Types::               Different sorts of things that are streamed.
* Lstream Functions::           Functions for working with lstreams.
* Lstream Methods::             Creating new lstream types.

Interface to MS Windows

* Different kinds of Windows environments::
* Windows Build Flags::
* Windows I18N Introduction::
* Modules for Interfacing with MS Windows::

Interface to the X Window System

* Lucid Widget Library::        An interface to various widget sets.
* Modules for Interfacing with X Windows::

Lucid Widget Library

* Generic Widget Interface::    The lwlib generic widget interface.
* Scrollbars::
* Menubars::
* Checkboxes and Radio Buttons::
* Progress Bars::
* Tab Controls::

Dumping

* Dumping Justification::
* Overview::
* Data descriptions::
* Dumping phase::
* Reloading phase::
* Remaining issues::

Dumping phase

* Object inventory::
* Address allocation::
* The header::
* Data dumping::
* Pointers dumping::

Future Work

* Future Work -- General Suggestions::
* Future Work -- Elisp Compatibility Package::
* Future Work -- Drag-n-Drop::
* Future Work -- Standard Interface for Enabling Extensions::
* Future Work -- Better Initialization File Scheme::
* Future Work -- Keyword Parameters::
* Future Work -- Property Interface Changes::
* Future Work -- Toolbars::
* Future Work -- Menu API Changes::
* Future Work -- Removal of Misc-User Event Type::
* Future Work -- Mouse Pointer::
* Future Work -- Extents::
* Future Work -- Version Number and Development Tree Organization::
* Future Work -- Improvements to the @code{xemacs.org} Website::
* Future Work -- Keybindings::
* Future Work -- Byte Code Snippets::
* Future Work -- Lisp Stream API::
* Future Work -- Multiple Values::
* Future Work -- Macros::
* Future Work -- Specifiers::
* Future Work -- Display Tables::
* Future Work -- Making Elisp Function Calls Faster::
* Future Work -- Lisp Engine Replacement::

Future Work -- Toolbars

* Future Work -- Easier Toolbar Customization::
* Future Work -- Toolbar Interface Changes::

Future Work -- Mouse Pointer

* Future Work -- Abstracted Mouse Pointer Interface::
* Future Work -- Busy Pointer::

Future Work -- Extents

* Future Work -- Everything should obey duplicable extents::

Future Work -- Keybindings

* Future Work -- Keybinding Schemes::
* Future Work -- Better Support for Windows Style Key Bindings::
* Future Work -- Misc Key Binding Ideas::

Future Work -- Byte Code Snippets

* Future Work -- Autodetection::
* Future Work -- Conversion Error Detection::
* Future Work -- Unicode::
* Future Work -- BIDI Support::
* Future Work -- Localized Text/Messages::

Future Work -- Lisp Engine Replacement

* Future Work -- Lisp Engine Discussion::
* Future Work -- Lisp Engine Replacement -- Implementation::
* Future Work -- Startup File Modification by Packages::

Future Work Discussion

* Discussion -- Garbage Collection::
* Discussion -- Glyphs::
* Discussion -- Dialog Boxes::
* Discussion -- Multilingual Issues::
* Discussion -- Instantiators and Generic Property Accessors::
* Discussion -- Switching to C++::
* Discussion -- Windows External Widget::
* Discussion -- Packages::
* Discussion -- Distribution Layout::

Discussion -- Garbage Collection

* Discussion -- Pure Space::
* Discussion -- Hashtable-Based Marking and Cleanup::
* Discussion -- The Anti-Cons::

Old Future Work

* Old Future Work -- A Portable Unexec Replacement::
* Old Future Work -- Indirect Buffers::
* Old Future Work -- Improvements in support for non-ASCII (European) keysyms under X::
* Old Future Work -- RTF Clipboard Support::
* Old Future Work -- xemacs.org Mailing Address Changes::
* Old Future Work -- Lisp callbacks from critical areas of the C code::

@end detailmenu
@end menu

@node Introduction, Authorship of XEmacs, Top, Top
@chapter Introduction
@cindex introduction
@cindex authorship, manual

This manual documents the internals of XEmacs. It presumes knowledge of how to use XEmacs (@pxref{Top,,, xemacs, XEmacs User's Manual}), and especially, knowledge of XEmacs Lisp (@pxref{Top,,, lispref, XEmacs Lisp Reference Manual}).
Information in either of these manuals will not be repeated here, and some information in the Lisp Reference Manual in particular is more relevant to a person working on the internals than to the average XEmacs Lisp programmer. (In such cases, a cross-reference is usually made to the Lisp Reference Manual.)

Ideally, this manual would be complete and up-to-date. Unfortunately, in reality it is neither, due to the limited resources of the maintainers of XEmacs. (That said, it is much better than the internal documentation of most programs.) Also, much information about the internals is documented only in the code itself, in the form of comments. Furthermore, since the maintainers are more likely to be working on the code than on this manual, information contained in comments may be more up-to-date than information in this manual. Do not assume that all information in this manual is necessarily accurate as of the snapshot of the code you are looking at, and in the case of contradictions between the code comments and the manual, @strong{always} assume that the code comments are correct. (Because of the proximity of the comments to the code, comments will rarely be out-of-date.)

The manual is organized in chapters which are broadly grouped into major divisions:

@enumerate
@item
First is the introduction, including this chapter and chapters on the history and authorship of XEmacs.
@item
Next, starting with @ref{XEmacs from the Outside}, are a couple of chapters giving a broad overview of the internal workings of XEmacs.
@item
Afterwards, starting with @ref{XEmacs from the Perspective of Building}, are some chapters documenting important information relevant to those working on the code.
@item
The remaining divisions document the nitty-gritty details of the internal workings. First, starting with @ref{XEmacs from the Inside}, is a division on the low-level types and allocation routines and the workings of the Lisp interpreter that drives XEmacs.
@item
Next, starting with @ref{Buffers}, is a division on the parts of the code specifically devoted to text processing, including multilingual support (Mule).
@item
Afterwards, starting with @ref{Consoles; Devices; Frames; Windows}, is a division covering the display mechanism and the objects and modules relevant to this.
@item
Then, starting with @ref{Events and the Event Loop}, is a division covering the interface between XEmacs and the outside world, including user interactions, subprocesses, file I/O, interfaces to particular windowing systems, and dumping.
@item
Finally, starting with @ref{Future Work}, is a division containing proposals and discussion relating to future work on XEmacs.
@end enumerate

This manual was primarily written by Ben Wing. Certain sections were written by others, including those mentioned on the title page as well as other coders. Some sections were lifted directly from comments in the code, and in those cases we may not be completely aware of the authorship. In addition, due to the collaborative nature of XEmacs, many people have made small changes and emendations as they have discovered problems. The following is a (necessarily incomplete) list of the work that was @emph{not} done by Ben Wing (for more complete information, take a look at the ChangeLog for the @file{man} directory and the CVS records of actual changes):

@table @asis
@item Stephen Turnbull
Various cleanup work, mostly post-2000. Object-Oriented Techniques in XEmacs. A Reader's Guide to XEmacs Coding Conventions. Searching and Matching.
Regression Testing XEmacs. Modules for Regression Testing. Lucid Widget Library. A number of sections in the Future Work chapter.
@item Martin Buchholz
Various cleanup work, mostly pre-2001. Docs on inline functions. Docs on dfc conversion functions (Conversion to and from External Data). Improvements in support for non-ASCII (European) keysyms under X. A section or two in the Future Work chapter.
@item Hrvoje Niksic
Coding for Mule.
@item Matthias Neubauer
Garbage Collection - Step by Step.
@item Olivier Galibert
Portable dumper documentation.
@item Andy Piper
Redisplay Piece by Piece. Glyphs.
@item Chuck Thompson
Line Start Cache.
@item Kenichi Handa
CCL.
@item Jamie Zawinski
A couple of sections in the Future Work chapter.
@end table

@node Authorship of XEmacs, A History of Emacs, Introduction, Top
@chapter Authorship of XEmacs
@cindex authorship, XEmacs

General authorship in chronological order:

@table @asis
@item Jamie Zawinski, Eric Benson, Matthieu Devin, Harlan Sexton
These were the early creators of Lucid Emacs, the predecessor of XEmacs. Jamie Zawinski was the primary maintainer and coder for Lucid Emacs, active between early 1991 and June 1994. He presided over versions 19.0 through 19.10, and then abruptly left for Netscape. He wrote the advanced stream code, the Xt interface code, the byte compiler, the original version of the X selection code, the first, second and third versions of the face code (which appeared in 19.0, 19.6 and 19.9 respectively), and part of the keymap code; he also separated the Lisp directories into many subdirectories and made many smaller changes. Matthieu Devin wrote the original version of the extents code. Someone else at Lucid wrote the Lucid widget library (lwlib), with the exception of the scrollbar code, which was added later.
@item Richard Mlynarik
Active 1991 to 1993, author of much of the current Lisp object scheme, including lrecords and lcrecords (he added this support in 1993 to allow for 28-bit pointers, which had previously been restricted to 26 bits). Moved the minibuffer and abbrev code into Lisp, worked on the keymap code, and did the initial synching between XEmacs and the first released version of GNU Emacs version 19 in mid-1993.
@item Martin Buchholz
Active 1995 to 2001, maintainer of XEmacs late 1999 to ?, author of the current configure support, many optimizations to the byte interpreter, many improvements to the case-changing code and many bug fixes to the process and system-specific code; also general spell-checking and code-cleanliness guru.
@item Steve Baur
Maintainer of XEmacs 1996 to 1999, responsible for many improvements to the XEmacs development process, for example, creation of the review board and arranging for XEmacs to be placed under CVS. Author of the package code.
@item Chuck Thompson
Active January 1993 to June of 1996, author of the current and previous versions of the redisplay code and maintainer of XEmacs from mid-1994 to mid-1996. Creator of xemacs.org. Also wrote the scrollbar code, the original configure support, and prototype versions of the toolbar and device code.
@item Ben Wing
Active April 1993 to April 1996 and February 2000 to present. Chief coder for XEmacs between 1994 and 1996. Ben Wing was never the maintainer of XEmacs, and as a result is the author of more of the XEmacs-specific code in XEmacs than anyone else. Author of the Mule support, the extents code, the glyphs and specifiers code, most of the toolbar and device-abstraction code, the error-checking code, the Lstream code, the bit-vector, char-table, and range-table code, much of the current Xt code, much of the events code (including most of the TTY event code), some of the faces code, and numerous other aspects of the code. Also author of most of the XEmacs documentation, including the Internals Manual and the XEmacs additions to the Lisp Reference Manual, and responsible for much of the synching between XEmacs and GNU Emacs.
@item Kyle Jones
Author of the minimal tagbits support, which allows for 32-bit pointers and 31-bit integers.
@item Olivier Galibert
Author of the portable dumping mechanism.
@item Andy Piper
Author of the widget support, the gutter support and much of the Microsoft Windows support.
@item Kirill Katsnelson
Author of many improvements to Microsoft Windows support, the current subprocess code, and revamping of the display size change mechanism.
@item Jonathan Harris
Author of much of the Microsoft Windows support.
@end table

Authorship of some of the modules:

@table @file
@item alloc.c
Inherited 1991 from a prototype of GNU Emacs 19. Around mid-1993, Richard Mlynarik redid much of the code, creating the existing system of object abstractions (where each object can define its own marking method, printing method, and so on) and the existing scheme of lrecords and lcrecords. This was done both to increase the number of bits that a pointer can occupy from 26 to 28 and to provide a general framework for creating new object types easily. The garbage collection and frob-block allocation code is left over from the original version, but was cleaned up somewhat by Mlynarik.

Later in 1993, Jamie Zawinski improved the code that kept track of pure-space usage so it would report exactly where you exceeded the pure space and how much pure space you are going to have to add to get everything to fit. He also added code to issue nice pure-space and garbage-collection statistics at the end of dumping.

Early in 1995, Ben Wing cleaned up the frob-block code to be as compact as possible and added the various bits of error checking, which are controlled using the ERROR_CHECK_* macros. He also added the ability of strings to be resized, which is necessary under Mule, because you can replace one character in a string with another character of a different size, which forces the string to be resized. Ben Wing also added bit vectors for 19.13 around September 1995, and lcrecord lists for 19.14 around December 1995.

Steve Baur did some work on the purification and dump-time code, and added Doug Lea malloc support from Emacs 20.2 circa 1998.

Kyle Jones continued the work done by Mlynarik, reducing the number of primitive Lisp types so that there are only three: integer, character, and a pointer type that encompasses all other types. This allows for 31-bit integers and 32-bit pointers, although there is a potential slowdown from some extra indirections when determining the type of an object, and some memory increase for the objects that were previously the most primitive types. Martin Buchholz has recently (February 2000) done some work to eliminate most of the slowdown.

Olivier Galibert, mid-1999 to 2000, implemented the portable dumper. This writes out the state of the Lisp object heap to a disk file in a relocatable fashion so that it can later be read in at any memory location. This work entailed a number of changes in alloc.c. For example, pure space was removed, and structures were created to define the types of all the elements contained in the various Lisp object structures and associated structures.
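To illustrate the kind of three-type tagging scheme described above, here is a minimal sketch, assuming a 32-bit word and 4-byte-aligned object pointers. The names are hypothetical illustrations, not XEmacs's actual macros (see @file{lisp.h} for the real definitions).

@example
/* Hypothetical sketch of a three-type tagging scheme (not the
   actual XEmacs macros).  A set low bit marks an integer, leaving
   31 value bits.  Otherwise the second-lowest bit distinguishes
   characters (tag 10) from pointers (tag 00).  Because objects are
   at least 4-byte aligned, a pointer's two low bits are naturally
   zero, so the word can hold a full 32-bit pointer unmodified. */
typedef unsigned long lisp_word;      /* assumed to be 32 bits */

#define WORD_IS_INT(w)  ((w) & 1)            /* tag pattern x1 */
#define WORD_IS_CHAR(w) (((w) & 3) == 2)     /* tag pattern 10 */
#define WORD_IS_PTR(w)  (((w) & 3) == 0)     /* tag pattern 00 */

#define MAKE_INT(i)     ((((lisp_word) (i)) << 1) | 1)
#define INT_VALUE(w)    (((long) (w)) >> 1)  /* assumes arithmetic shift */

#define MAKE_CHAR(c)    ((((lisp_word) (c)) << 2) | 2)
#define CHAR_VALUE(w)   ((w) >> 2)

#define MAKE_PTR(p)     ((lisp_word) (p))    /* p must be 4-byte aligned */
#define PTR_VALUE(w)    ((void *) (w))
@end example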
@item alloca.c
Inherited a long time ago from a prerelease version of GNU Emacs 19 and kept in sync with more recent versions; very few changes for XEmacs. Most changes consist of converting the code to ANSI C and fixing up the includes at the top of the file to follow XEmacs conventions.
@item alloca.s
Inherited almost unchanged from the FSF and kept in sync up through 19.30; basically no changes for XEmacs.
@end table

@node A History of Emacs, The XEmacs Split, Authorship of XEmacs, Top
@chapter A History of Emacs
@cindex history of Emacs, a
@cindex Emacs, a history of
@cindex Hackers (Steven Levy)
@cindex Levy, Steven
@cindex ITS (Incompatible Timesharing System)
@cindex Stallman, Richard
@cindex RMS
@cindex MIT
@cindex TECO
@cindex FSF
@cindex Free Software Foundation

XEmacs is a powerful, customizable text editor and development environment. It began as Lucid Emacs, which was in turn derived from GNU Emacs, a program written by Richard Stallman of the Free Software Foundation. GNU Emacs dates back to the 1970's, and was modelled after a package called ``Emacs'', written in 1976, that was a set of macros on top of TECO, an old, old text editor written at MIT on the DEC PDP 10 under one of the earliest time-sharing operating systems, ITS (Incompatible Timesharing System). (ITS dates back well before Unix.) ITS, TECO, and Emacs were products of a group of people at MIT who called themselves ``hackers'', who shared an idealistic belief system about the free exchange of information and were fanatical in their devotion to and time spent with computers. (The hacker subculture dates back to the late 1950's at MIT and is described in detail in Steven Levy's book @cite{Hackers}. This book also includes a lot of information about Stallman himself and the development of Lisp, a programming language developed at MIT that underlies Emacs.)

@menu
* Through Version 18::          Unification prevails.
* Epoch::                       An early graphical split of GNU Emacs.
* Lucid Emacs::                 One version 19 Emacs.
* GNU Emacs 19::                The other version 19 Emacs.
* GNU Emacs 20::                The other version 20 Emacs.
* XEmacs::                      The continuation of Lucid Emacs.
@end menu

@node Through Version 18, Epoch, A History of Emacs, A History of Emacs
@section Through Version 18
@cindex version 18, through
@cindex Gosling, James
@cindex Great Usenet Renaming

As described above, Emacs began life in the mid-1970's as a series of editor macros for TECO, an early editor on the PDP-10. In the early 1980's it was rewritten in C as a collaboration between Richard M. Stallman (RMS) and James Gosling (the creator of Java); its extension language was known as @dfn{Mocklisp}. This version of Emacs-in-C formed the basis for the early versions of GNU Emacs and also for Gosling's Unipress Emacs, a commercial product. Because of bad blood between the two over the issue of commercialism, RMS pretty much disowned this collaboration, referring to it as ``Gosling Emacs''.

At this point we pick up with a time line of events. (A broader timeline is available at @uref{http://www.jwz.org/doc/emacs-timeline.html, ``Emacs Timeline''}.)

@itemize @bullet
@item
Unipress Emacs, a $395 commercial product, was released on May 6, 1983. This was an outgrowth of the Emacs-in-C collaboration written by Gosling and RMS.
@item
GNU Emacs version 13.0? was released on March 20, 1985.
This may have been the initial public release. This was also based on this same Emacs-in-C collaboration.
@item
GNU Emacs version 15.10 was released on April 11, 1985.
@item
GNU Emacs version 15.34 was released on May 7, 1985. This appears to be the last release of version 15.
@item
GNU Emacs version 16 (first released version was 16.56) was released on July 15, 1985. All Gosling code was removed due to potential copyright problems with the code.
@item
Version 16.57: released on September 16, 1985.
@item
Versions 16.58, 16.59: released on September 17, 1985.
@item
Version 16.60: released on September 19, 1985. These later version 16's incorporated patches from the net, esp. for getting Emacs to work under System V.
@item
Version 17.36 (first official v17 release) released on December 20, 1985. Included a TeX-able user manual. First official unpatched version that worked on vanilla System V machines.
@item
Version 17.43 (second official v17 release) released on January 25, 1986.
@item
Version 17.45 released on January 30, 1986.
@item
Version 17.46 released on February 4, 1986.
@item
Version 17.48 released on February 10, 1986.
@item
Version 17.49 released on February 12, 1986.
@item
Version 17.55 released on March 18, 1986.
@item
Version 17.57 released on March 27, 1986.
@item
Version 17.58 released on April 4, 1986.
@item
Version 17.61 released on April 12, 1986.
@item
Version 17.63 released on May 7, 1986.
@item
Version 17.64 released on May 12, 1986.
@item
Version 18.24 (a beta version) released on October 2, 1986.
@item
Version 18.30 (a beta version) released on November 15, 1986.
@item
Version 18.31 (a beta version) released on November 23, 1986.
@item
Version 18.32 (a beta version) released on December 7, 1986.
@item
Version 18.33 (a beta version) released on December 12, 1986.
@item
Version 18.35 (a beta version) released on January 5, 1987.
@item
Version 18.36 (a beta version) released on January 21, 1987.
@item
January 27, 1987: The Great Usenet Renaming. net.emacs is now comp.emacs.
@item
Version 18.37 (a beta version) released on February 12, 1987.
@item
Version 18.38 (a beta version) released on March 3, 1987.
@item
Version 18.39 (a beta version) released on March 14, 1987.
@item
Version 18.40 (a beta version) released on March 18, 1987.
@item
Version 18.41 (the first ``official'' release) released on March 22, 1987.
@item
Version 18.45 released on June 2, 1987.
@item
Version 18.46 released on June 9, 1987.
@item
Version 18.47 released on June 18, 1987.
@item
Version 18.48 released on September 3, 1987.
@item
Version 18.49 released on September 18, 1987.
@item
Version 18.50 released on February 13, 1988.
@item
Version 18.51 released on May 7, 1988.
@item
Version 18.52 released on September 1, 1988.
@item
Version 18.53 released on February 24, 1989.
@item
Version 18.54 released on April 26, 1989.
@item
Version 18.55 released on August 23, 1989. This is the earliest version that is still available by FTP. (Verified in November 2004.)
@item
Version 18.56 released on January 17, 1991.
@item
Version 18.57 released late January, 1991.
@item
Version 18.58 released sometime in 1991.
@item
Version 18.59 released October 31, 1992.
@end itemize

@node Epoch, Lucid Emacs, Through Version 18, A History of Emacs
@section Epoch
@cindex Epoch
@cindex UIUC

#### Document Epoch

A time line for Epoch is

@itemize @bullet
@item
Epoch 1.0 released December 14, 1989. (by Simon Kaplan, Chris Love, et al.)
@item
Epoch 2.0 released December 23, 1989.
@item
Epoch 3.1 released February 6, 1990.
@item
Epoch 3.2 released December[????] 11, 1990.
@item
Epoch 4.0 released August 27, 1990.
@end itemize

@node Lucid Emacs, GNU Emacs 19, Epoch, A History of Emacs
@section Lucid Emacs
@cindex Lucid Emacs
@cindex Lucid Inc.
@cindex Energize
@cindex Epoch

Lucid Emacs was developed by the (now-defunct) Lucid Inc., a maker of C++ and Lisp development environments. It began when Lucid decided they wanted to use Emacs as the editor and cornerstone of their C++ development environment (called ``Energize''). They needed many features that were not available in the existing version of GNU Emacs (version 18.5something), in particular good and integrated support for GUI elements such as mouse support, multiple fonts, multiple window-system windows, etc. A branch of GNU Emacs called Epoch, written at the University of Illinois, existed that supplied many of these features; however, Lucid needed more than what existed in Epoch. At the time, the Free Software Foundation was working on version 19 of Emacs (this was sometime around 1991), which was planned to have similar features, and so Lucid decided to work with the Free Software Foundation. Their plan was to add features that they needed, and coordinate with the FSF so that the features would get included back into Emacs version 19. Delays in the release of version 19 occurred, however (resulting in it finally being released more than a year after what was initially planned), and Lucid encountered unexpected technical resistance in getting their changes merged back into version 19, so they decided to release their own version of Emacs, which became Lucid Emacs 19.0.

@cindex Zawinski, Jamie
@cindex Sexton, Harlan
@cindex Benson, Eric
@cindex Devin, Matthieu

The initial authors of Lucid Emacs were Matthieu Devin, Harlan Sexton, and Eric Benson, and the work was later taken over by Jamie Zawinski, who became ``Mr. Lucid Emacs'' for many releases.

A time line for Lucid Emacs is

@itemize @bullet
@item
Version 19.0 shipped with Energize 1.0, April 1992.
@item
Version 19.1 released June 4, 1992.
@item
Version 19.2 released June 19, 1992.
@item
Version 19.3 released September 9, 1992.
@item
Version 19.4 released January 21, 1993.
@item
Version 19.5 released February 5, 1993. This was a repackaging of 19.4 with a few bug fixes and shipped with Energize 2.0. It was a trade-show giveaway and never released to the net.
@item
Version 19.6 released April 9, 1993.
@item
Version 19.7 was a repackaging of 19.6 with a few bug fixes and shipped with Energize 2.1. Never released to the net.
@item
Version 19.8 released September 6, 1993. (Epoch 4.0 merger of redisplay code, preliminary I18N support, code merged from GNU Emacs 19.8 beta.)
@item
Version 19.9 released January 12, 1994. (Scrollbars, Athena.)
@item
Version 19.10 released May 27, 1994. (Uses `configure'; code merged from GNU Emacs 19.23 beta and further merging with Epoch 4.0.) Known as ``Lucid Emacs'' when shipped by Lucid, and as ``XEmacs'' when shipped by Sun; but Lucid went out of business a few days later and it's unclear whether very many copies of 19.10 were released by Lucid. (Last release by Jamie Zawinski.)
@end itemize

@node GNU Emacs 19, GNU Emacs 20, Lucid Emacs, A History of Emacs
@section GNU Emacs 19
@cindex GNU Emacs 19
@cindex Emacs 19, GNU
@cindex version 19, GNU Emacs
@cindex FSF Emacs

About a year after the initial release of Lucid Emacs, the FSF released a beta of their version of Emacs 19 (referred to here as ``GNU Emacs''). By this time, the current version of Lucid Emacs was 19.6.
(Strangely, the first released beta from the FSF was GNU Emacs 19.7.) A time line for GNU Emacs version 19 is

@itemize @bullet
@item
Version 19.7 beta released May 22, 1993. First public beta v19 release.
@item
Version 19.8 beta released May 27, 1993.
@item
Version 19.9 beta released May 27, 1993.
@item
Version 19.10 beta released May 30, 1993.
@item
Version 19.11 beta released June 1, 1993.
@item
Version 19.12 beta released June 2, 1993.
@item
Version 19.13 beta released June 8, 1993.
@item
Version 19.14 beta released June 17, 1993.
@item
Version 19.15 beta released June 19, 1993.
@item
Version 19.16 beta released July 6, 1993.
@item
Version 19.17 beta released late July, 1993.
@item
Version 19.18 beta released August 9, 1993.
@item
Version 19.19 beta released August 15, 1993.
@item
Version 19.20 beta released November 17, 1993.
@item
Version 19.21 beta released November 17, 1993.
@item
Version 19.22 beta released November 28, 1993.
@item
Version 19.23 beta released May 17, 1994.
@item
Version 19.24 beta released May 16, 1994.
@item
Version 19.25 beta released June 3, 1994.
@item
Version 19.26 beta released September 11, 1994.
@item
Version 19.27 beta released September 14, 1994.
@item
Version 19.28 (first ``official'' release) released November 1, 1994.
@item
Version 19.29 released June 21, 1995.
@item
Version 19.30 released November 24, 1995.
@item
Version 19.31 released May 25, 1996.
@item
Version 19.32 released July 31, 1996.
@item
Version 19.33 released August 11, 1996.
@item
Version 19.34 released August 21, 1996.
@item
Version 19.34b released September 6, 1996.
@end itemize

@cindex Mlynarik, Richard
@cindex Baur, Steve

In some ways, GNU Emacs 19 was better than Lucid Emacs; in some ways, worse. Lucid soon began incorporating features from GNU Emacs 19 into Lucid Emacs; for the first year, the work was mostly done by Richard Mlynarik, who had been working on and using GNU Emacs for a long time (back as far as version 16 or 17). After that, Lucid folded and Sun continued with XEmacs; further merging work has continued up through the present, done mostly by Ben Wing, but a good deal of synching was done by Steve Baur in 1996 with GNU Emacs 19.34.

@node GNU Emacs 20, XEmacs, GNU Emacs 19, A History of Emacs
@section GNU Emacs 20
@cindex GNU Emacs 20
@cindex Emacs 20, GNU
@cindex version 20, GNU Emacs
@cindex FSF Emacs

On February 2, 1997 work began on GNU Emacs to integrate Mule. The first release was made in September of that year.

A timeline for GNU Emacs 20 is

@itemize @bullet
@item
Version 20.1 released September 17, 1997.
@item
Version 20.2 released September 20, 1997.
@item
Version 20.3 released August 19, 1998.
@item
Version 20.4 released July 12, 1999; on comp.emacs, July 27.
@item
Version 20.5 released ???.
@item
Version 20.6 released ???.
@item
Version 20.7 released ???.
@end itemize

A timeline for GNU Emacs 21 is

@itemize @bullet
@item
Version 21.1 released October 20, 2001.
@item
Version 21.2 released March 16, 2002.
@item
Version 21.3 released March 19, 2003.
@end itemize

@node XEmacs, , GNU Emacs 20, A History of Emacs
@section XEmacs
@cindex XEmacs
@cindex Sun Microsystems
@cindex University of Illinois
@cindex Illinois, University of
@cindex SPARCWorks
@cindex Andreessen, Marc
@cindex Baur, Steve
@cindex Buchholz, Martin
@cindex Kaplan, Simon
@cindex Wing, Ben
@cindex Thompson, Chuck
@cindex Win-Emacs
@cindex Epoch
@cindex Amdahl Corporation

Around the time that Lucid was developing Energize, Sun Microsystems was developing their own development environment (called ``SPARCWorks'') and also decided to use Emacs. They joined forces with the Epoch team at the University of Illinois and later with Lucid. The maintainer of the last-released version of Epoch was Marc Andreessen, but he dropped out and the Epoch project, headed by Simon Kaplan, lured Chuck Thompson away from a system administration job to become the primary Lucid Emacs author for Epoch and Sun. Chuck's area of specialty became the redisplay engine (he replaced the old Lucid Emacs redisplay engine with a ported version from Epoch and then later rewrote it from scratch). Sun also hired Ben Wing (the author of Win-Emacs, a port of Lucid Emacs to Microsoft Windows 3.1) in 1993, for what was initially a one-month contract to fix some event problems but later became a many-year involvement, punctuated by a six-month contract with Amdahl Corporation.

@cindex rename to XEmacs
@cindex Thompson, Chuck
@cindex Wing, Ben

In 1994, Sun and Lucid agreed to rename Lucid Emacs to XEmacs (a name not favorable to either company); the first release called XEmacs was version 19.11. In June 1994, Lucid folded and Jamie quit to work for the newly formed Mosaic Communications Corp., later Netscape Communications Corp. (co-founded by the same Marc Andreessen, who had quit his Epoch job to work on a graphical browser for the World Wide Web). Chuck and Ben then became the primary authors and maintainers of XEmacs, with Chuck putting out versions 19.11 through 19.14 in conjunction with Ben. For 19.12 through 19.14, Chuck added the new redisplay and various other display improvements and Ben added Mule support (support for Asian and other languages), multi-device support, glyphs, specifiers, and GIF/JPG/PNG support, and redesigned most of the internal Lisp subsystems to better support the Mule work, display work and the various other features being added to XEmacs. After 19.14 Chuck retired from XEmacs and Steve Baur stepped in as release engineer. Ben Wing continued on as the primary author and architect of XEmacs and has remained, sometimes on-and-off, with XEmacs until the present day (late 2004), being responsible for perhaps 75% of all the non-FSF code in the core (i.e. not the packages) of XEmacs.

@cindex MULE merged XEmacs appears

Soon after 19.13 was released, work began in earnest on the MULE internationalization code and the source tree was divided into two development paths. The MULE version was initially called 19.20, but was soon renamed to 20.0. In 1996 Martin Buchholz of Sun Microsystems took over the care and feeding of it and worked on it in parallel with the 19.14 development that was occurring at the same time. After much work by Martin, it was decided to release 20.0 ahead of 19.15 in February 1997. The source tree remained divided until 20.2, when the version 19 source was finally retired at version 19.16.

@cindex Baur, Steve
@cindex Buchholz, Martin
@cindex XEmacs goes it alone

In 1997, Sun finally dropped all pretense of support for XEmacs and Martin Buchholz left the company in November. Since then (and for most of the previous year, because Steve Baur was never paid to work on XEmacs), XEmacs has existed solely on the contributions of volunteers from the Free Software Community.

@cindex Jones, Kyle
@cindex Niksic, Hrvoje
@cindex Galibert, Olivier
@cindex Piper, Andy
@cindex Harris, Jonathan
@cindex Katsnelson, Kirill
@cindex Turnbull, Stephen
@cindex Shelton, Vin
@cindex Wing, Ben

Between 1997 and 2000, MS-Windows support was added and stabilized by Jonathan Harris, Andy Piper, Ben Wing and Kirill Katsnelson. Hrvoje Niksic and Kyle Jones figured prominently in XEmacs development during these same years. Steve Baur added the package system in 1997 (?), and Olivier Galibert added the portable dumper support around 2000. Martin Buchholz took over from Steve Baur as release manager in late 1998 (?), and continued in this position through to early 2000 (?), when Stephen Turnbull took it over. XEmacs has also been split into stable and experimental branches since early 1999, and Vin Shelton has been the release manager of the stable branches since the beginning. Ben Wing suffered severe pain problems throughout much of this time, making him unable to use his hands, but he contributed when he could, especially in the form of dictated design documents.

@cindex Sperber, Michael
@cindex Turnbull, Stephen
@cindex James, Jerry
@cindex Youngs, Steve
@cindex Aichner, Adrian
@cindex Wing, Ben
@cindex Crestani, Marcus
@cindex Perry, Bill
@cindex Purvis, Malcolm
@cindex Shelton, Vin

Starting around 2000, Kyle, Hrvoje, Martin and Kirill became less active. Jonathan Harris had dropped out of the project around 1998, and Andy Piper became mostly inactive by the year 2001 or 2002. New faces appeared, however, and others continued strong:

@itemize @bullet
@item
Michael Sperber, who had been in the background as a beta tester for a fair amount of time, began to assume a more active role. He revamped the path-searching code at initialization time, did some major work on the CVS repositories, and is in the process of a major project to replace the garbage collector, which he is overseeing with some of his students (e.g. Marcus Crestani).
@item
Steve Youngs stepped in as package maintainer in late 1998 (?).
@item
Stephen Turnbull has continued to produce the experimental beta releases, write code when he can, produce many design documents, and generally oversee the managerial aspects of the project.
@item
Jerry James appeared on the scene in early 2002 and has contributed a large amount of code, including the module subsystem, bignums, and lots of other code cleanup.
@item
Bill Perry, who had been active on and off in XEmacs since the early 1990's (e.g. he did a fair amount of work on the JPG and PNG interface and added the TIFF interface, in addition to writing the Emacs/W3 browser), added GTK support for XEmacs, a major project for which he received a multi-month contract through BeOpen (?). He has since disappeared, but Malcolm Purvis has taken up the GTK project again and is keeping it going when he has time.
@item
Adrian Aichner is continuing to create and update the web site at @uref{www.xemacs.org,XEmacs Web Site}, and is a particularly active beta tester.
@item
Ben Wing has recovered somewhat from the bad years of 1997 - 1999 and has resumed his position as Architect of XEmacs and chief code contributor to the project. He added Mule-on-Windows support, Unicode support, the Internals Manual (originally written by him during his last days at Sun) and many other projects, and is now working on a new behaviors system and cleanups of various other subsystems.
@item
Vin Shelton continues to put out stable releases of XEmacs.
@end itemize

@cindex merging attempts

Many attempts have been made to merge XEmacs and GNU Emacs, but they have consistently failed.

A more detailed history is contained in the XEmacs About page.

For more detailed information about the features added to each version, see the files @file{NEWS}, @file{ONEWS}, and @file{OONEWS} in the @file{etc/} directory.

A time line for XEmacs is

@itemize @bullet
@item
version 19.11 (first XEmacs) released September 13, 1994.
@item
Initial work on Mule support began September 1994, by both Ben Wing and Stig. Both projects got bogged down in other issues.
@item
version 19.12 released June 23, 1995. (The Release Times 10. Included rewritten redisplay, TTY support, multi-device support, device and console objects, specifiers, glyphs, toolbars, horizontal scrollbars, Lucid scrollbar widget, 3-d modeline, stay-up Lucid menus, resizable minibuffer, echo area is a true buffer, MD5 hashing support, expanded menubar, redone menu specification format (including menu filters), rewritten extents, renamed ``screen'' to ``frame'', misc-user events, rewritten face code, rewritten mouse code, warnings system, CL backquote syntax, critical C-g, code merging with GNU Emacs 19.28. New packages Hyperbole, OOBR, hm--html-menus, viper, lazy-lock, ksh-mode, rsz-minibuf.)
@item
Mule work done in earnest from May through November, 1995 by Ben Wing. Early on, much of the work involved Mule-izing and was incorporated into 19.12 and 19.13. After the release of 19.13, further work was forked onto a new development branch, which eventually became 20.0.
@item
version 19.13 released September 1, 1995. (Bug-fix release. Message logging, background pixmaps, sticky modifiers, Linux audio support, new Elisp manual, keyboard-translate-table. New packages ada-mode, arc-mode, auto-show-mode, completion, dabbrev, easymenu, live-icon, mailcrypt 3.2, two-column.)
@item
xemacs.org created, date ??? -- early 1996?.
@item
version 19.14 released June 23, 1996. (TTY colors, mousable/color modeline, GIF/JPEG/PNG support, file dialog box, blinking cursor, gnuattach, auto scrolling horizontally to keep point in view, major code merging with GNU Emacs 19.30, key bindings from GNU Emacs 19.30, surrogate minibuffers, function-key-map, key-translation-map. New packages PSGML, Java/VRML modes, GNUS 5.2.)
@item
version 20.0 released February 9, 1997.
@item
version 19.15 released March 28, 1997. (Custom, widget, new logo and background color, introduction of `compatible' variables, major code merging with GNU Emacs 19.30. New packages EFS, TM, AUCTeX, redo, igrep, uniquify, many others.)
@item
version 20.1 (not released to the net) April 15, 1997.
@item
version 20.2 released May 16, 1997.
@item
version 19.16 released October 31, 1997. (Bug-fix release. Faster font-locking. Not much else.)
@item
version 20.3 (the first stable version of XEmacs 20.x) released November 30, 1997.
@item
version 20.4 released February 28, 1998.
@item
version 21.0.60 released December 10, 1998.
(The version naming scheme was changed at this point: [a] the second version number is odd for stable versions, even for beta versions; [b] a third version number is added, replacing the ``beta xxx'' ending for beta versions and allowing for periodic maintenance releases for stable versions. Therefore, 21.0 was never ``officially'' released; similarly for 21.2, etc.)
@item
version 21.0.61 released January 4, 1999.
@item
version 21.0.63 released February 3, 1999.
@item
version 21.0.64 released March 1, 1999.
@item
version 21.0.65 released March 5, 1999.
@item
version 21.0.66 released March 12, 1999.
@item
version 21.0.67 released March 25, 1999.
@item
version 21.1.2 released May 14, 1999. (This is the followup to 21.0.67. The second version number was bumped to indicate the beginning of the ``stable'' series.)
@item
version 21.1.3 released June 26, 1999.
@item
version 21.1.4 released July 8, 1999.
@item
version 21.1.6 released August 14, 1999. (There was no 21.1.5.)
@item
version 21.1.7 released September 26, 1999.
@item
version 21.1.8 released November 2, 1999.
@item
version 21.1.9 released February 13, 2000.
@item
version 21.1.10 released May 7, 2000.
@item
version 21.1.10a released June 24, 2000.
@item
version 21.1.11 released July 18, 2000.
@item
version 21.1.12 released August 5, 2000.
@item
version 21.1.13 released January 7, 2001.
@item
version 21.1.14 released January 27, 2001.
@item
version 21.2.9 released February 3, 1999.
@item
version 21.2.10 released February 5, 1999.
@item
version 21.2.11 released March 1, 1999.
@item
version 21.2.12 released March 5, 1999.
@item
version 21.2.13 released March 12, 1999.
@item
version 21.2.14 released May 14, 1999.
@item
version 21.2.15 released June 4, 1999.
@item
version 21.2.16 released June 11, 1999.
@item
version 21.2.17 released June 22, 1999.
@item
version 21.2.18 released July 14, 1999.
@item
version 21.2.19 released July 30, 1999.
@item
version 21.2.20 released November 10, 1999.
@item
version 21.2.21 released November 28, 1999.
@item
version 21.2.22 released November 29, 1999.
@item
version 21.2.23 released December 7, 1999.
@item
version 21.2.24 released December 14, 1999.
@item
version 21.2.25 released December 24, 1999.
@item
version 21.2.26 released December 31, 1999.
@item
version 21.2.27 released January 18, 2000.
@item
version 21.2.28 released February 7, 2000.
@item
version 21.2.29 released February 16, 2000.
@item
version 21.2.30 released February 21, 2000.
@item
version 21.2.31 released February 23, 2000.
@item
version 21.2.32 released March 20, 2000.
@item
version 21.2.33 released May 1, 2000.
@item
version 21.2.34 released May 28, 2000.
@item
version 21.2.35 released July 19, 2000.
@item
version 21.2.36 released October 4, 2000.
@item
version 21.2.37 released November 14, 2000.
@item
version 21.2.38 released December 5, 2000.
@item
version 21.2.39 released December 31, 2000.
@item
version 21.2.40 released January 8, 2001.
@item
version 21.2.41 ``Polyhymnia'' released January 17, 2001.
@item
version 21.2.42 ``Poseidon'' released January 20, 2001.
@item
version 21.2.43 ``Terspichore'' released January 26, 2001.
@item
version 21.2.44 ``Thalia'' released February 8, 2001.
@item
version 21.2.45 ``Thelxepeia'' released February 23, 2001.
@item
version 21.2.46 ``Urania'' released March 21, 2001.
@item
version 21.2.47 ``Zephir'' released April 14, 2001.
@item
XEmacs 21.4.0 ``Solid Vapor'' released April 16, 2001.
@item
XEmacs 21.4.1 ``Copyleft'' released April 19, 2001.
@item
XEmacs 21.4.2 ``Developer-Friendly Unix APIs'' released May 10, 2001.
@item XEmacs 21.4.3 "Academic Rigor" released May 17, 2001. @item XEmacs 21.4.4 "Artificial Intelligence" released July 28, 2001. @item XEmacs 21.4.5 "Civil Service" released October 23, 2001. @item XEmacs 21.4.6 "Common Lisp" released December 17, 2001. @item XEmacs 21.4.7 "Economic Science" released May 4, 2002. @item XEmacs 21.4.8 "Honest Recruiter" released May 9, 2002. @item XEmacs 21.4.9 "Informed Management" released August 23, 2002. @item XEmacs 21.4.10 "Military Intelligence" released November 2, 2002. @item XEmacs 21.4.11 "Native Windows TTY Support" released January 3, 2003. @item XEmacs 21.4.12 "Portable Code" released January 15, 2003. @item XEmacs 21.4.13 "Rational FORTRAN" released May 25, 2003. @item XEmacs 21.4.14 "Reasonable Discussion" released September 3, 2003. @item XEmacs 21.4.15 "Security Through Obscurity" released February 2, 2004. @item version 21.5.0 "alfalfa" released April 18, 2001. @item version 21.5.1 "anise" released May 9, 2001. @item version 21.5.2 "artichoke" released July 28, 2001. @item version 21.5.3 "asparagus" released September 7, 2001. @item version 21.5.4 "bamboo" released January 8, 2002. @item version 21.5.5 "beets" released March 5, 2002. @item version 21.5.6 "bok choi" released April 5, 2002. @item version 21.5.7 "broccoflower" released July 2, 2002. @item version 21.5.8 "broccoli" released July 27, 2002. @item version 21.5.9 "brussels sprouts" released August 30, 2002. @item version 21.5.10 "burdock" released January 4, 2003. @item version 21.5.11 "cabbage" released February 16, 2003. @item version 21.5.12 "carrot" released April 24, 2003. @item version 21.5.13 "cauliflower" released May 10, 2003. @item version 21.5.14 "cassava" released June 1, 2003. @item version 21.5.15 "celery" released September 3, 2003. @item version 21.5.16 "celeriac" released September 26, 2003. @item version 21.5.17 "chayote" released March 22, 2004. @item version 21.5.18 "chestnut" released October 22, 2004. @end itemize @node The XEmacs Split, XEmacs from the Outside, A History of Emacs, Top @chapter The XEmacs Split @cindex XEmacs split Author: @uref{mailto:ben@@xemacs.org,Ben Wing} @subheading Ben Wing's attempts @strong{NOTE NOTE NOTE}: The following is a @strong{highly} opinionated piece written by one of the main authors of XEmacs. This reflects his opinions, and his only! It is included here because it may help to clarify some of the issues that are keeping the two versions of Emacs separate. Many people look at the split between GNU Emacs and XEmacs and are convinced that the XEmacs team is being needlessly divisive and just needs to cooperate a bit with RMS, and the two versions of Emacs will merge. In fact there have been six or seven major attempts at merging, each running hundreds of messages long, and all of them coming from the XEmacs side. All have failed because they have eventually come to the same conclusion, which is that RMS has no real interest in cooperation at all. If you work with him, you have to do it his way---``my way or the highway''. Specifically: @enumerate @item RMS insists on having legal papers signed for every bit of code that goes into GNU Emacs. RMS's lawyers have told him that every contribution over ten lines long requires legal papers. These papers cannot be filled out over the web; they must be signed in person and mailed to the FSF. Obviously this by itself has a tendency to inhibit contributions because of the hassle factor.
Furthermore, many people (and especially organizations) are either hesitant to sign legal papers or refuse outright, for reasons mentioned below. For these reasons, XEmacs has never required signed legal papers for the code in it. Such papers are not a part of the GPL and are not required by any projects other than those of the FSF (for example, Linux does not require such papers). Since we do not know exactly who wrote every bit of code that has been contributed to XEmacs in the last nine years, we would essentially have to rewrite large sections of the code to satisfy such a requirement. The situation, however, is worse than that, because many of the large copyright holders of XEmacs (for example, Sun Microsystems) refuse to sign legal papers. Although they have not stated their reasons, there are quite a number of reasons not to sign legal papers: @itemize @bullet @item By doing so you essentially give up all control over your code. You can no longer release your code under a different license. If you want to use your code that you've contributed to the FSF in a project of your own, and that project is not released under the GPL, you are not allowed to do this. Obviously, large companies tend to want to reuse their code in many different projects and as a result feel very uncomfortable about signing legal papers. @item One of the dangers of assigning copyright to the FSF is that if the FSF happens to be taken over by some evil corporate entity, or by someone with different ideas than RMS, they will own all copyright-assigned code, and can revoke the GPL and enforce any license they please. If the code has many different copyright holders, such a scenario is much less likely. @end itemize @item RMS does not like abstract data structures. Abstract data structures are the foundation of XEmacs and most other modern programming projects. In my opinion, it is difficult to impossible to write maintainable and expandable code without using abstract data structures. In merging talks with RMS, he has said we can have any abstract data structures we want in a merged version but must allow direct access to the implementation as well, which defeats the primary purpose of having abstract data structures. @item RMS is very unwilling to compromise when it comes to divergent implementations of the same functionality, which is very common between XEmacs and GNU Emacs. Rather than taking the better interface on technical grounds, RMS insists that both interfaces must be implemented in C at the same level (rather than implementing one in C and the other on top of it), so that code that uses either interface is just as fast. This means that the resulting merged Emacs would be filled with a lot of very complicated code to simultaneously support two divergent interfaces, and would be difficult to maintain in this state. @item RMS's idea of compromise and cooperation is almost purely political rather than technical. The XEmacs maintainers would like to have issues resolved by examining them technically and deciding what makes the most sense from a technical perspective. RMS, however, wants to proceed on a tit-for-tat basis, which is to say, ``If we support this feature of yours, we also get to support this other feature of mine.'' The result of such a process is typically a big mess, because there is no overarching design but instead a great many incompatible things hodgepodged together.
@end enumerate If only some of the above differences were firmly held by RMS, and if he were willing to compromise effectively on the others and to demonstrate willingness to work with us on the issues that he is less willing to compromise on, we might go ahead with the merge despite misgivings. However, RMS has shown no real interest at all in compromising. He has never stated how all of the redundant work that would be required to support his preconditions would get done. It's unlikely that he would do it all, and it's certainly not clear that the XEmacs project would be willing to do it all, given that it is a tremendous amount of extra work and the XEmacs project is already strapped for coding resources. (Not to mention the inherent difficulty in convincing people to redo existing work for primarily political reasons.) In general, the free software community is quite strapped as a whole for coding resources; duplicative efforts accomplish very little positive and have a lot of negative effects in that they take away what few resources we do have from projects that would actually be useful. RMS, however, does not seem to be bothered by this. He is more interested in sticking firm to his principles, though the heavens may fall, than in working to create genuinely useful software. It is abundantly clear that RMS has no real interest in unity unless it happens to be on his own terms and allows him ultimate control over the result. He would rather see nothing happen at all than something that is not exactly according to his principles. The fact that few if any people share his principles is meaningless to him. @subheading Jamie Zawinski's attempts In 1991, I was working at Lucid Inc., and our newest product, Energize, was an integrated development environment for C and C++ on Unix. The design of this development environment involved very tight integration between the various tools: compilers, linkers, debuggers, graphers, and editors. So of course we needed a powerful editor to tie the whole thing together, and it was obvious to all of us that there was only one editor that would do: Emacs. At the time, the current version of GNU Emacs from the FSF was Emacs 18. There was another version of GNU Emacs, called Epoch, that had been developed at NCSA: a set of patches to Emacs 18 that gave it much better GUI support. (Emacs 18 was very much a tty program, with GUI support crudely grafted on as an afterthought.) For the last few years, Emacs 19 had been due to be released ``real soon now,'' and was expected to integrate the various features of Epoch in a cleaner way. The Epoch maintainers themselves saw Epoch as an interim measure, awaiting the release of Emacs 19. So, at Lucid we didn't want to tie ourselves to Emacs 18 or to Epoch, because those code bases were considered obsolete by their maintainers. We wanted to use Emacs 19 with our product: the idea was that our product would operate with the off-the-shelf version of Emacs 19, which most people would already have pre-installed on their system anyway. That way, Energize would make use, to some extent, of tools you already had and were already using. The only problem was, Emacs 19 wasn't done yet. So, we decided we could help solve that problem by providing money and resources to get Emacs 19 finished. Even though Energize was a proprietary, commercial product, all of our work on Emacs (and on GCC and GDB) was released under the GPL.
We even assigned the copyright on all of our work back to the FSF, because we had no proprietary interest in Emacs per se: it was just a tool that we wanted to use, and we wanted it to work well, and that was best achieved by making our modifications to it be as freely available as possible. (This was one of the earliest, if not the earliest, example of a commercial product being built to a significant extent out of open source software.) Well, our attempts to help the FSF complete their Emacs 19 project were pretty much a disaster, and we reached the point where we just couldn't wait any longer: we needed to ship our product to customers, and our product needed to have an editor in it. So we bundled up our work on GNU Emacs 19, called it Lucid Emacs, and released it to the world. This incident has become famous as one of the most significant ``forks'' in a free software code base. When Lucid went out of business in 1994, and I came to Netscape, I passed the torch for the maintenance of Lucid Emacs to Chuck Thompson (at NCSA) and Ben Wing (at Sun), who renamed it from ``Lucid Emacs'' to ``XEmacs.'' To this day, XEmacs is as popular as FSFmacs, because it still provides features and a design that many people find superior to the FSF's version. I attribute Lucid Emacs's success to two things, primarily: First, that my focus was on user interface, and an attempt to both make Emacs be a good citizen of modern GUI desktops, and to make it as easy for new users to pick up Emacs as any other GUI editor; Second, that I ran the Lucid Emacs project in a much more open, inclusive way than RMS ran his project. I was not just willing, but eager, to delegate significant and critical pieces of the project to other hackers once they had shown that they knew what they were doing. RMS was basically never willing to do this with anybody. Other things that helped Lucid Emacs's success, but were probably less important than the above: We gave the users what they wanted first. People had been anticipating Emacs 19 for years, and we stopped dragging our feet and finished it. So this got us a lot of users up front. However, XEmacs's current popularity can't be attributed to this, not since 1993, anyway. Lucid Emacs was technically superior in many ways. This won us the mindshare of many good developers, who preferred working with Lucid Emacs to FSF Emacs. It would be nice if technical superiority was all that mattered, but realistically, the other factors were probably more important than this one, as far as number of users is concerned. The following messages, from the Lucid Emacs mailing lists in 1992 and 1993, comprise the bulk (if not the entirety) of the public discussions between the Lucid and FSF camps on why the split happened and why a merger never did. The current XEmacs maintainers have a much more pusillanimous summary of this history on their XEmacs versus GNU Emacs page. -- jwz, 11-Feb-2000. @node XEmacs from the Outside, The Lisp Language, The XEmacs Split, Top @chapter XEmacs from the Outside @cindex XEmacs from the outside @cindex outside, XEmacs from the @cindex read-eval-print XEmacs appears to the outside world as an editor, but it is really a Lisp environment. At its heart is a Lisp interpreter; it also ``happens'' to contain many specialized object types (e.g. buffers, windows, frames, events) that are useful for implementing an editor. 
Some of these objects (in particular windows and frames) have displayable representations, and XEmacs provides a function @code{redisplay()} that ensures that the display of all such objects matches their internal state. Most of the time, a standard Lisp environment is in a @dfn{read-eval-print} loop---i.e. ``read some Lisp code, execute it, and print the results''. XEmacs has a similar loop: @itemize @bullet @item read an event @item dispatch the event (i.e. ``do it'') @item redisplay @end itemize Reading an event is done using the Lisp function @code{next-event}, which waits for something to happen (typically, the user presses a key or moves the mouse) and returns an event object describing this. Dispatching an event is done using the Lisp function @code{dispatch-event}, which looks up the event in a keymap object (a particular kind of object that associates an event with a Lisp function) and calls that function. The function ``does'' what the user has requested by changing the state of particular frame objects, buffer objects, etc. Finally, @code{redisplay()} is called, which updates the display to reflect the changes just made. Thus is an ``editor'' born. @cindex bridge, playing @cindex taxes, doing @cindex pi, calculating Note that you do not have to use XEmacs as an editor; you could just as well make it do your taxes, compute pi, play bridge, etc. You'd just have to write functions to do those operations in Lisp. @node The Lisp Language, XEmacs from the Perspective of Building, XEmacs from the Outside, Top @chapter The Lisp Language @cindex Lisp language, the @cindex Lisp vs. C @cindex C vs. Lisp @cindex Lisp vs. Java @cindex Java vs. Lisp @cindex dynamic scoping @cindex scoping, dynamic @cindex dynamic types @cindex types, dynamic @cindex Java @cindex Common Lisp @cindex Gosling, James Lisp is a general-purpose language that is higher-level than C and in many ways more powerful. Powerful dialects of Lisp such as Common Lisp are probably much better languages for writing very large applications than is C. (Unfortunately, for many non-technical reasons, C and its successor C++ have become the dominant languages for application development. These languages are both inadequate for extremely large applications, which is evidenced by the fact that newer, larger programs are becoming ever harder to write and are requiring ever more programmers despite great advances in C development environments; and by the fact that, although hardware speeds and reliability have been growing at an exponential rate, most software is still generally considered to be slow and buggy.) The new Java language holds promise as a better general-purpose development language than C. Java has many features in common with Lisp that are not shared by C (this is not a coincidence, since Java was designed by James Gosling, a former Lisp hacker). This will be discussed more later. For those used to C, here is a summary of the basic differences between C and Lisp: @enumerate @item Lisp has an extremely regular syntax. Every function, expression, and control statement is written in the form @example (@var{func} @var{arg1} @var{arg2} ...) @end example This is as opposed to C, which writes functions as @example func(@var{arg1}, @var{arg2}, ...) @end example but writes expressions involving operators as (e.g.) @example @var{arg1} + @var{arg2} @end example and writes control statements as (e.g.) @example while (@var{expr}) @{ @var{statement1}; @var{statement2}; ...
@} @end example Lisp equivalents of the latter two would be @example (+ @var{arg1} @var{arg2} ...) @end example and @example (while @var{expr} @var{statement1} @var{statement2} ...) @end example @item Lisp is a safe language. Assuming there are no bugs in the Lisp interpreter/compiler, it is impossible to write a program that ``core dumps'' or otherwise causes the machine to execute an illegal instruction. This is very different from C, where perhaps the most common outcome of a bug is exactly such a crash. A corollary of this is that the C operation of casting a pointer is impossible (and unnecessary) in Lisp, and that it is impossible to access memory outside the bounds of an array. @item Programs and data are written in the same form. The parenthesis-enclosing form described above for statements is the same form used for the most common data type in Lisp, the list. Thus, it is possible to represent any Lisp program using Lisp data types, and for one program to construct Lisp statements and then dynamically @dfn{evaluate} them, or cause them to execute. @item All objects are @dfn{dynamically typed}. This means that part of every object is an indication of what type it is. A Lisp program can manipulate an object without knowing what type it is, and can query an object to determine its type. This means that, correspondingly, variables and function parameters can hold objects of any type and are not normally declared as being of any particular type. This is opposed to the @dfn{static typing} of C, where variables can hold exactly one type of object and must be declared as such, and objects do not contain an indication of their type because it's implicit in the variables they are stored in. It is possible in C to have a variable hold different types of objects (e.g. through the use of @code{void *} pointers or variable-argument functions), but the type information must then be passed explicitly in some other fashion, leading to additional program complexity. @item Allocated memory is automatically reclaimed when it is no longer in use. This operation is called @dfn{garbage collection} and involves looking through all variables to see what memory is being pointed to, and reclaiming any memory that is not pointed to and is thus ``inaccessible'' and out of use. This is as opposed to C, in which allocated memory must be explicitly reclaimed using @code{free()}. If you simply drop all pointers to memory without freeing it, it becomes ``leaked'' memory that still takes up space. Over a long period of time, this can cause your program to grow and grow until it runs out of memory. @item Lisp has built-in facilities for handling errors and exceptions. In C, when an error occurs, usually either the program exits entirely or the routine in which the error occurs returns a value indicating this. If an error occurs in a deeply-nested routine, then every routine currently called must unwind itself normally and return an error value back up to the next routine. This means that every routine must explicitly check for an error in all the routines it calls; if it does not do so, unexpected and often random behavior results. This is an extremely common source of bugs in C programs. An alternative would be to do a non-local exit using @code{longjmp()}, but that is often very dangerous because the routines that were exited past had no opportunity to clean up after themselves and may leave things in an inconsistent state, causing a crash shortly afterwards. 
Lisp provides mechanisms to make such non-local exits safe. When an error occurs, a routine simply signals that an error of a particular class has occurred, and a non-local exit takes place. Any routine can trap errors occurring in routines it calls by registering an error handler for some or all classes of errors. (If no handler is registered, a default handler, generally installed by the top-level event loop, is executed; this prints out the error and continues.) Routines can also specify cleanup code (called an @dfn{unwind-protect}) that will be called when control exits from a block of code, no matter how that exit occurs---i.e. even if a function deeply nested below it causes a non-local exit back to the top level. Note that this facility has appeared in some recent vintages of C, in particular Visual C++ and other PC compilers written for the Microsoft Win32 API. (Both the error-trapping and the cleanup facilities are illustrated in the sketch below.) @item In Emacs Lisp, local variables are @dfn{dynamically scoped}. This means that if you declare a local variable in a particular function, and then call another function, that subfunction can ``see'' the local variable you declared. This is actually considered a bug in Emacs Lisp and in all other early dialects of Lisp, and was corrected in Common Lisp. (In Common Lisp, you can still declare dynamically scoped variables if you want to---they are sometimes useful---but variables by default are @dfn{lexically scoped} as in C.) @end enumerate For those familiar with Lisp, Emacs Lisp is modelled after MacLisp, an early dialect of Lisp developed at MIT (no relation to the Macintosh computer). There is a Common Lisp compatibility package available for Emacs that provides many of the features of Common Lisp. The Java language is derived in many ways from C, and shares a similar syntax, but has the following features in common with Lisp (and different from C): @enumerate @item Java is a safe language, like Lisp. @item Java provides garbage collection, like Lisp. @item Java has built-in facilities for handling errors and exceptions, like Lisp. @item Java has a type system that combines the best advantages of both static and dynamic typing. Objects (except very simple types) are explicitly marked with their type, as in dynamic typing; but there is a hierarchy of types, and functions are declared to accept only certain types, thus providing the increased compile-time error-checking of static typing. @end enumerate The Java language also has some negative attributes: @enumerate @item Java uses the edit/compile/run model of software development. This makes it hard to use interactively. For example, to use Java like @code{bc} it is necessary to write a special-purpose, albeit tiny, application. In Emacs Lisp, a calculator comes built-in without any effort---one can always just type an expression in the @code{*scratch*} buffer. @item Java tries too hard to enforce, not merely enable, portability, making ordinary access to standard OS facilities painful. Java has an @dfn{agenda}. I think this is why @code{chdir} is not part of standard Java, which is inexcusable. @end enumerate Unfortunately, there is no perfect language. Static typing allows a compiler to catch programmer errors and produce more efficient code, but makes programming more tedious and less fun.
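The following short Emacs Lisp sketch illustrates the error-handling and dynamic-scoping behavior described in items 6 and 7 above. It is illustrative code only, not part of XEmacs; the function names and the variable @code{my-indent-level} are made up for the example, while @code{condition-case}, @code{unwind-protect}, and the other primitives used are standard Emacs Lisp.

@example
;; Error handling: `condition-case' traps errors signalled anywhere
;; inside its body, even from deeply nested calls.
(defun safe-divide (a b)            ; made-up example function
  (condition-case err
      (/ a b)
    (arith-error                    ; traps division by zero
     (message "arithmetic error: %s" err)
     0)))

(safe-divide 10 0)                  ; => 0, after printing a message

;; Cleanup code: the unwind forms of `unwind-protect' run no matter
;; how control leaves the protected form, normally or via an error.
(defun visit-and-cleanup (file)     ; made-up example function
  (let ((buf (find-file-noselect file)))
    (unwind-protect
        (with-current-buffer buf
          (buffer-size))            ; an error here still reaches the cleanup
      (kill-buffer buf))))          ; always runs

;; Dynamic scoping: a `let' binding is visible in functions called
;; within the `let', not just in code lexically inside it.
(defvar my-indent-level 2)          ; made-up variable for the example
(defun show-indent () my-indent-level)
(let ((my-indent-level 8))
  (show-indent))                    ; => 8 under dynamic scoping
@end example

Note how @code{show-indent} never mentions the @code{let}; under the lexical scoping of C or Common Lisp the call would instead see the global value 2.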
For the foreseeable future, an Ideal Editing and Programming Environment (and that is what XEmacs aspires to) will be programmable in multiple languages: high-level ones like Lisp for user customization and prototyping, and lower-level ones for infrastructure and industrial-strength applications. If I had my way, XEmacs would be friendly towards the Python, Scheme, C++, ML, etc., communities. But there are serious technical difficulties in achieving that goal. The word @dfn{application} in the previous paragraph was used intentionally. XEmacs implements an API for programs written in Lisp that makes it a full-fledged application platform, very much like an OS inside the real OS. @node XEmacs from the Perspective of Building, Build-Time Dependencies, The Lisp Language, Top @chapter XEmacs from the Perspective of Building @cindex XEmacs from the perspective of building @cindex building, XEmacs from the perspective of The heart of XEmacs is the Lisp environment, which is written in C. This is contained in the @file{src/} subdirectory. Underneath @file{src/} are two subdirectories of header files: @file{s/} (header files for particular operating systems) and @file{m/} (header files for particular machine types). In practice the distinction between the two types of header files is blurred. These header files define or undefine certain preprocessor constants and macros to indicate particular characteristics of the associated machine or operating system. As part of the configure process, one @file{s/} file and one @file{m/} file are identified for the particular environment in which XEmacs is being built. XEmacs also contains a great deal of Lisp code. This implements the operations that make XEmacs useful as an editor as well as just a Lisp environment, and also contains many add-on packages that allow XEmacs to browse directories, act as a mail and Usenet news reader, compile Lisp code, etc. There is actually more Lisp code than C code associated with XEmacs, but much of the Lisp code is peripheral to the actual operation of the editor. The Lisp code all lies in subdirectories underneath the @file{lisp/} directory. The @file{lwlib/} directory contains C code that implements a generalized interface to different X widget toolkits and also implements some widgets of its own that behave like Motif widgets but are faster, free, and in some cases more powerful. The code in this directory compiles into a library and is mostly independent of XEmacs. The @file{etc/} directory contains various data files associated with XEmacs. Some of them are actually read by XEmacs at startup; others merely contain useful information of various sorts. The @file{lib-src/} directory contains C code for various auxiliary programs that are used in connection with XEmacs. Some of them are used during the build process; others are used to perform certain functions that cannot conveniently be placed in the XEmacs executable (e.g. the @file{movemail} program for fetching mail out of @file{/var/spool/mail}, which must be setgid to @file{mail} on many systems; and the @file{gnuclient} program, which allows an external script to communicate with a running XEmacs process). The @file{man/} directory contains the sources for the XEmacs documentation. It is mostly in a form called Texinfo, which can be converted either into a printed document (by passing it through @TeX{}) or into on-line documentation called @dfn{info files}.
The @file{info/} directory contains the results of formatting the XEmacs documentation as @dfn{info files}, for on-line use. These files are used when you enter the Info system using @kbd{C-h i} or through the Help menu. The @file{dynodump/} directory contains auxiliary code used to build XEmacs on Solaris platforms. The other directories contain various miscellaneous code and information that is not normally used or needed. The first step of building involves running the @file{configure} program and passing it various parameters to specify any optional features you want, compiler arguments, and the like, as described in the @file{INSTALL} file. This determines what the build environment is, chooses the appropriate @file{s/} and @file{m/} file, and runs a series of tests to determine many details about your environment, such as which library functions are available and exactly how they work. The reason for running these tests is that it allows XEmacs to be compiled on a much wider variety of platforms than those that the XEmacs developers happen to be familiar with, including various sorts of hybrid platforms. This is especially important now that many operating systems give you a great deal of control over exactly what features you want installed, and allow for easy upgrading of parts of a system without upgrading the rest. It would be impossible to pre-determine and pre-specify the information for all possible configurations. In fact, the @file{s/} and @file{m/} files are basically @emph{evil}, since they contain unmaintainable platform-specific hard-coded information. XEmacs has been moving in the direction of having all system-specific information be determined dynamically by @file{configure}. Perhaps someday we can @code{rm -rf src/s src/m}. When configure is done running, it generates @file{Makefile}s and @file{GNUmakefile}s and the file @file{src/config.h} (which describes the features of your system) from template files. You then run @file{make}, which compiles the auxiliary code and programs in @file{lib-src/} and @file{lwlib/} and the main XEmacs executable in @file{src/}. The result of compiling and linking is an executable called @file{temacs}, which is @emph{not} the final XEmacs executable. @file{temacs} by itself is not intended to function as an editor or even display any windows on the screen, and if you simply run it, it will exit immediately. The @file{Makefile} runs @file{temacs} with certain options that cause it to initialize itself, read in a number of basic Lisp files, and then dump itself out into a new executable called @file{xemacs}. This new executable has been pre-initialized and contains pre-digested Lisp code that is necessary for the editor to function (this includes most basic editing functions, e.g. @code{kill-line}, that can be defined in terms of other Lisp primitives; some initialization code that is called when certain objects, such as frames, are created; and all of the standard keybindings and code for the actions they result in). This executable, @file{xemacs}, is the executable that you run to use the XEmacs editor. Although @file{temacs} is not intended to be run as an editor, it can be, by using the incantation @code{temacs -batch -l loadup.el run-temacs}. This is useful when the dumping procedure described above is broken, or when using certain program debugging tools such as Purify. These tools get mighty confused by the tricks played by the XEmacs build process, such as allocating memory in one process and freeing it in the next.
@node Build-Time Dependencies, The Modules of XEmacs, XEmacs from the Perspective of Building, Top @chapter Build-Time Dependencies @cindex build-time dependencies @cindex dependencies, build-time This is a collection of random notes on build-time dependencies as of about XEmacs 21.5.11. Of course we use @file{make} to manage most dependencies, especially for the C code. The main thing here is for the Release Engineer to run the @file{src/make-src-depend} script every so often, at least at every release. However, since most of XEmacs is written in Lisp, and we compile and preload the Lisp for efficiency, managing Lisp compilation using @file{make} would imply running XEmacs hundreds of times. This would make the build process unbearably long. Thus those processes that require running the same Lisp programs on many files are managed using Lisp driver functions rather than @file{make}. The situation is further complicated by the fact that documentation strings are kept in an external database, and referenced in the dumped XEmacs by file offset. Finally, the Lisp files are processed to collect autoloaded function information and customize dependencies, which are then written into generated Lisp files. About this, Ben sez: @quotation @enumerate 1 @item Redumping depends on up-to-date dumped @file{.elc} files and @file{DOC} but not directly on auto-autoloads. @item Rebuilding dumped @file{.elc} files depends on auto-autoloads being up-to-date. @item Building the @file{DOC} file depends on up-to-date dumped @file{.elc} files but not directly on auto-autoloads. @item Recompiling anything depends on @file{bytecomp.elc} and @file{byte-optimize.elc} being up-to-date. @end enumerate Put these together and you'll see it's perfectly acceptable to build auto-autoloads @strong{after} dumping if no @file{.elc} files are out-of-date. @end quotation These Lisp driver programs typically run from temacs, not a dumped XEmacs. The simplest (but time-consuming) way to achieve a sane environment for running Lisp is to load @file{loadup.el} or @file{loadup-el.el}. (The latter is used to avoid loading possibly out-of-date compiled Lisp files.) If this is not done, you have to construct the environment yourself. See @file{dumped-lisp.el} to see how it is done in the dumped XEmacs. One potential gotcha is that very early customizations are now handled by adding the definitions to the special variable @code{custom-declare-variable-list}, defined in @file{subr.el}. If you use any higher-level functionality that might load @file{custom.el}, but you do not need @file{subr.el}, you should @samp{defvar} @code{custom-declare-variable-list} to prevent the @samp{void-variable} error. (Currently this is only needed for @file{make-docfile.el}.) 
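As a concrete illustration of that guard, a Lisp driver run from bare temacs can simply predefine the variable before anything pulls in @file{custom.el}. This is a minimal sketch, not the actual contents of @file{make-docfile.el}; initializing to @code{nil} reflects the assumption that the variable merely accumulates definitions.

@example
;; Running from bare temacs, without loadup.el: custom.el may be
;; loaded before subr.el has defined `custom-declare-variable-list',
;; so predefine it here to prevent the void-variable error.
(defvar custom-declare-variable-list nil)
@end example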
@node The Modules of XEmacs, Rules When Writing New C Code, Build-Time Dependencies, Top @chapter The Modules of XEmacs @cindex modules of XEmacs @menu * A Summary of the Various XEmacs Modules:: * Low-Level Modules:: * Basic Lisp Modules:: * Modules for Standard Editing Operations:: * Modules for Interfacing with the File System:: * Modules for Other Aspects of the Lisp Interpreter and Object System:: * Modules for Interfacing with the Operating System:: @end menu @node A Summary of the Various XEmacs Modules, Low-Level Modules, The Modules of XEmacs, The Modules of XEmacs @section A Summary of the Various XEmacs Modules @cindex summary of the various XEmacs modules @cindex modules, summary of the various XEmacs The following is a list of the sections describing the various modules (i.e. files) that implement XEmacs. Some of them are in this chapter; some of them are attached to the chapters describing the modules in question. @itemize @bullet @item @ref{Low-Level Modules}. @item @ref{Basic Lisp Modules}. @item @ref{Modules for Standard Editing Operations}. @item @ref{Editor-Level Control Flow Modules}. @item @ref{Modules for the Basic Displayable Lisp Objects}. @item @ref{Modules for other Display-Related Lisp Objects}. @item @ref{Modules for the Redisplay Mechanism}. @item @ref{Modules for Interfacing with the File System}. @item @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @ref{Modules for Interfacing with the Operating System}. @item @ref{Modules for Interfacing with MS Windows}. @item @ref{Modules for Interfacing with X Windows}. @item @ref{Modules for Internationalization}. @item @ref{Modules for Regression Testing}. @end itemize The following table contains cross-references from each module in XEmacs 21.5 to the section (if any) describing it. @multitable {@file{intl-auto-encap-win32.c}} {@ref{Modules for Other Aspects of the Lisp Interpreter and Object System}} @item @file{Emacs.ad.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{EmacsFrame.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{EmacsFrame.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{EmacsFrameP.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{EmacsManager.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{EmacsManager.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{EmacsManagerP.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{EmacsShell-sub.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{EmacsShell.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{EmacsShell.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{EmacsShellP.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{ExternalClient-Xlib.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{ExternalClient.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{ExternalClient.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{ExternalClientP.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{ExternalShell.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{ExternalShell.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{ExternalShellP.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{Makefile.in.in} @tab @item @file{abbrev.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{alloc.c} @tab @ref{Basic Lisp Modules}. 
@item @file{alloca.c} @tab @ref{Low-Level Modules}. @item @file{alloca.s} @tab @item @file{backtrace.h} @tab @ref{Basic Lisp Modules}. @item @file{balloon-x.c} @tab @item @file{balloon_help.c} @tab @item @file{balloon_help.h} @tab @item @file{base64-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{bitmaps.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{blocktype.c} @tab @ref{Low-Level Modules}. @item @file{blocktype.h} @tab @ref{Low-Level Modules}. @item @file{broken-sun.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{buffer.c} @tab @ref{Modules for Standard Editing Operations}. @item @file{buffer.h} @tab @ref{Modules for Standard Editing Operations}. @item @file{bufslots.h} @tab @ref{Modules for Standard Editing Operations}. @item @file{byte-compiler-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{bytecode.c} @tab @ref{Basic Lisp Modules}. @item @file{bytecode.h} @tab @ref{Basic Lisp Modules}. @item @file{c-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{callint.c} @tab @ref{Modules for Standard Editing Operations}. @item @file{case-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{casefiddle.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{casetab.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{casetab.h} @tab @item @file{ccl-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{charset.h} @tab @item @file{chartab.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{chartab.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{cm.c} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{cm.h} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{cmdloop.c} @tab @ref{Editor-Level Control Flow Modules}. @item @file{cmds.c} @tab @ref{Modules for Standard Editing Operations}. @item @file{coding-system-slots.h} @tab @item @file{commands.h} @tab @ref{Modules for Standard Editing Operations}. @item @file{compiler.h} @tab @item @file{config.h.in} @tab @item @file{config.h} @tab @ref{Low-Level Modules}. @item @file{conslots.h} @tab @item @file{console-gtk-impl.h} @tab @item @file{console-gtk.c} @tab @item @file{console-gtk.h} @tab @item @file{console-impl.h} @tab @item @file{console-msw-impl.h} @tab @item @file{console-msw.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{console-msw.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{console-stream-impl.h} @tab @item @file{console-stream.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{console-stream.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{console-tty-impl.h} @tab @item @file{console-tty.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{console-tty.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{console-x-impl.h} @tab @item @file{console-x.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{console-x.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{console.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{console.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{data.c} @tab @ref{Basic Lisp Modules}. @item @file{database-tests.el} @tab @ref{Modules for Regression Testing}. 
@item @file{database.c} @tab @item @file{database.h} @tab @item @file{debug.c} @tab @ref{Low-Level Modules}. @item @file{debug.h} @tab @ref{Low-Level Modules}. @item @file{depend} @tab @item @file{device-gtk.c} @tab @item @file{device-impl.h} @tab @item @file{device-msw.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{device-tty.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{device-x.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{device.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{device.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{devslots.h} @tab @item @file{dgif_lib.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{dialog-gtk.c} @tab @item @file{dialog-msw.c} @tab @item @file{dialog-x.c} @tab @item @file{dialog.c} @tab @item @file{dired-msw.c} @tab @item @file{dired.c} @tab @ref{Modules for Interfacing with the File System}. @item @file{doc.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{doprnt.c} @tab @ref{Modules for Standard Editing Operations}. @item @file{dragdrop.c} @tab @item @file{dragdrop.h} @tab @item @file{dump-data.c} @tab @item @file{dump-data.h} @tab @item @file{dump-id.c} @tab @item @file{dumper.c} @tab @item @file{dumper.h} @tab @item @file{dynarr.c} @tab @ref{Low-Level Modules}. @item @file{ecrt0.c} @tab @ref{Low-Level Modules}. @item @file{editfns.c} @tab @ref{Modules for Standard Editing Operations}. @item @file{elhash.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{elhash.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{emacs-marshals.c} @tab @item @file{emacs-new.c.old} @tab @item @file{emacs-widget-accessors.c} @tab @item @file{emacs.c} @tab @ref{Low-Level Modules}. @item @file{emodules.c} @tab @item @file{emodules.h} @tab @item @file{esd.c} @tab @item @file{eval.c} @tab @ref{Basic Lisp Modules}. @item @file{event-Xt.c} @tab @ref{Editor-Level Control Flow Modules}. @item @file{event-gtk.c} @tab @item @file{event-gtk.h} @tab @item @file{event-msw.c} @tab @ref{Editor-Level Control Flow Modules}. @item @file{event-stream.c} @tab @ref{Editor-Level Control Flow Modules}. @item @file{event-tty.c} @tab @ref{Editor-Level Control Flow Modules}. @item @file{event-unixoid.c} @tab @item @file{event-xlike-inc.c} @tab @item @file{events-mod.h} @tab @ref{Editor-Level Control Flow Modules}. @item @file{events.c} @tab @ref{Editor-Level Control Flow Modules}. @item @file{events.h} @tab @ref{Editor-Level Control Flow Modules}. @item @file{extent-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{extents-impl.h} @tab @item @file{extents.c} @tab @ref{Modules for Standard Editing Operations}. @item @file{extents.h} @tab @ref{Modules for Standard Editing Operations}. @item @file{extw-Xlib.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{extw-Xlib.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{extw-Xt.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{extw-Xt.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{faces.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{faces.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{file-coding.c} @tab @ref{Modules for Internationalization}. @item @file{file-coding.h} @tab @ref{Modules for Internationalization}. 
@item @file{fileio.c} @tab @ref{Modules for Interfacing with the File System}. @item @file{filelock.c} @tab @ref{Modules for Interfacing with the File System}. @item @file{filemode.c} @tab @ref{Modules for Interfacing with the File System}. @item @file{floatfns.c} @tab @ref{Basic Lisp Modules}. @item @file{fns.c} @tab @ref{Basic Lisp Modules}. @item @file{font-lock.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{frame-gtk.c} @tab @item @file{frame-impl.h} @tab @item @file{frame-msw.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{frame-tty.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{frame-x.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{frame.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{frame.diff} @tab @item @file{frame.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{frameslots.h} @tab @item @file{free-hook.c} @tab @ref{Low-Level Modules}. @item @file{gccache-gtk.c} @tab @item @file{gccache-gtk.h} @tab @item @file{general-slots.h} @tab @item @file{general.c} @tab @ref{Basic Lisp Modules}. @item @file{getloadavg.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{getpagesize.h} @tab @ref{Low-Level Modules}. @item @file{gif_err.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{gif_io.c} @tab @item @file{gif_lib.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{gifalloc.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{gifrlib.h} @tab @item @file{glade.c} @tab @item @file{glyphs-eimage.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{glyphs-gtk.c} @tab @item @file{glyphs-gtk.h} @tab @item @file{glyphs-msw.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{glyphs-msw.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{glyphs-shared.c} @tab @item @file{glyphs-widget.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{glyphs-x.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{glyphs-x.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{glyphs.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{glyphs.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{gmalloc.c} @tab @ref{Low-Level Modules}. @item @file{gpmevent.c} @tab @ref{Editor-Level Control Flow Modules}. @item @file{gpmevent.h} @tab @ref{Editor-Level Control Flow Modules}. @item @file{gtk-glue.c} @tab @item @file{gtk-xemacs.c} @tab @item @file{gtk-xemacs.h} @tab @item @file{gui-gtk.c} @tab @item @file{gui-msw.c} @tab @item @file{gui-x.c} @tab @item @file{gui.c} @tab @item @file{gui.h} @tab @item @file{gutter.c} @tab @item @file{gutter.h} @tab @item @file{hash-table-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{hash.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{hash.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{hftctl.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{hpplay.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{imgproc.c} @tab @item @file{imgproc.h} @tab @item @file{indent.c} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{inline.c} @tab @ref{Low-Level Modules}. 
@item @file{input-method-motif.c} @tab @item @file{input-method-xlib.c} @tab @item @file{insdel.c} @tab @ref{Modules for Standard Editing Operations}. @item @file{insdel.h} @tab @ref{Modules for Standard Editing Operations}. @item @file{intl-auto-encap-win32.c} @tab @item @file{intl-auto-encap-win32.h} @tab @item @file{intl-encap-win32.c} @tab @item @file{intl-win32.c} @tab @item @file{intl-x.c} @tab @item @file{intl.c} @tab @ref{Modules for Internationalization}. @item @file{iso-wide.h} @tab @ref{Modules for Internationalization}. @item @file{keymap.c} @tab @ref{Editor-Level Control Flow Modules}. @item @file{keymap.h} @tab @ref{Editor-Level Control Flow Modules}. @item @file{lastfile.c} @tab @ref{Low-Level Modules}. @item @file{libinterface.c} @tab @item @file{libinterface.h} @tab @item @file{libsst.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{libsst.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{libst.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{line-number.c} @tab @item @file{line-number.h} @tab @item @file{linuxplay.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{lisp-disunion.h} @tab @ref{Basic Lisp Modules}. @item @file{lisp-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{lisp-union.h} @tab @ref{Basic Lisp Modules}. @item @file{lisp.h} @tab @ref{Basic Lisp Modules}. @item @file{lread.c} @tab @ref{Basic Lisp Modules}. @item @file{lrecord.h} @tab @ref{Basic Lisp Modules}. @item @file{lstream.c} @tab @ref{Modules for Interfacing with the File System}. @item @file{lstream.h} @tab @ref{Modules for Interfacing with the File System}. @item @file{macros.c} @tab @ref{Editor-Level Control Flow Modules}. @item @file{macros.h} @tab @ref{Editor-Level Control Flow Modules}. @item @file{make-src-depend} @tab @item @file{malloc.c} @tab @ref{Low-Level Modules}. @item @file{marker.c} @tab @ref{Modules for Standard Editing Operations}. @item @file{md5-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{md5.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{mem-limits.h} @tab @ref{Low-Level Modules}. @item @file{menubar-gtk.c} @tab @item @file{menubar-msw.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{menubar-msw.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{menubar-x.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{menubar.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{menubar.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{minibuf.c} @tab @ref{Editor-Level Control Flow Modules}. @item @file{miscplay.c} @tab @item @file{miscplay.h} @tab @item @file{mule-canna.c} @tab @ref{Modules for Internationalization}. @item @file{mule-ccl.c} @tab @ref{Modules for Internationalization}. @item @file{mule-ccl.h} @tab @item @file{mule-charset.c} @tab @ref{Modules for Internationalization}. @item @file{mule-charset.h} @tab @ref{Modules for Internationalization}. @item @file{mule-coding.c} @tab @ref{Modules for Internationalization}. @item @file{mule-mcpath.c} @tab @ref{Modules for Internationalization}. @item @file{mule-mcpath.h} @tab @ref{Modules for Internationalization}. @item @file{mule-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{mule-wnnfns.c} @tab @ref{Modules for Internationalization}. @item @file{mule.c} @tab @ref{Modules for Internationalization}. 
@item @file{nas.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{native-gtk-toolbar.c} @tab @item @file{ndir.h} @tab @ref{Modules for Interfacing with the File System}. @item @file{nsselect.m} @tab @item @file{nt.c} @tab @item @file{ntheap.c} @tab @item @file{ntplay.c} @tab @item @file{number-gmp.c} @tab @item @file{number-gmp.h} @tab @item @file{number-mp.c} @tab @item @file{number-mp.h} @tab @item @file{number.c} @tab @item @file{number.h} @tab @item @file{objects-gtk-impl.h} @tab @item @file{objects-gtk.c} @tab @item @file{objects-gtk.h} @tab @item @file{objects-impl.h} @tab @item @file{objects-msw-impl.h} @tab @item @file{objects-msw.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{objects-msw.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{objects-tty-impl.h} @tab @item @file{objects-tty.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{objects-tty.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{objects-x-impl.h} @tab @item @file{objects-x.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{objects-x.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{objects.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{objects.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{offix-cursors.h} @tab @item @file{offix-types.h} @tab @item @file{offix.c} @tab @item @file{offix.h} @tab @item @file{opaque.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{opaque.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{paths.h.in} @tab @item @file{paths.h} @tab @ref{Low-Level Modules}. @item @file{ppc.ldscript} @tab @item @file{pre-crt0.c} @tab @ref{Low-Level Modules}. @item @file{print.c} @tab @ref{Basic Lisp Modules}. @item @file{process-nt.c} @tab @item @file{process-slots.h} @tab @item @file{process-unix.c} @tab @item @file{process.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{process.el} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{process.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{procimpl.h} @tab @item @file{profile.c.orig} @tab @item @file{profile.c.rej} @tab @item @file{profile.c} @tab @item @file{profile.h} @tab @item @file{ralloc.c} @tab @ref{Low-Level Modules}. @item @file{rangetab.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{rangetab.h} @tab @item @file{realpath.c} @tab @ref{Modules for Interfacing with the File System}. @item @file{redisplay-gtk.c} @tab @item @file{redisplay-msw.c} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{redisplay-output.c} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{redisplay-tty.c} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{redisplay-x.c} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{redisplay.c} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{redisplay.h} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{regex.c} @tab @ref{Modules for Standard Editing Operations}. @item @file{regex.h} @tab @ref{Modules for Standard Editing Operations}. @item @file{regexp-tests.el} @tab @ref{Modules for Regression Testing}. 
@item @file{scrollbar-gtk.c} @tab @item @file{scrollbar-gtk.h} @tab @item @file{scrollbar-msw.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{scrollbar-msw.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{scrollbar-x.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{scrollbar-x.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{scrollbar.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{scrollbar.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{search.c} @tab @ref{Modules for Standard Editing Operations}. @item @file{select-common.h} @tab @item @file{select-gtk.c} @tab @item @file{select-msw.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{select-x.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{select.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{select.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{sgiplay.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{sheap.c} @tab @item @file{signal.c} @tab @ref{Low-Level Modules}. @item @file{sound.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{sound.h} @tab @item @file{specifier.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{specifier.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{src-headers} @tab @item @file{strcat.c} @tab @item @file{strcmp.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{strcpy.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{strftime.c} @tab @item @file{sunOS-fix.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{sunplay.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{sunpro.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{symbol-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{symbols.c} @tab @ref{Basic Lisp Modules}. @item @file{symeval.h} @tab @ref{Basic Lisp Modules}. @item @file{symsinit.h} @tab @ref{Basic Lisp Modules}. @item @file{syntax-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{syntax.c} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{syntax.h} @tab @ref{Modules for Other Aspects of the Lisp Interpreter and Object System}. @item @file{sysdep.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{sysdep.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{sysdir.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{sysdll.c} @tab @item @file{sysdll.h} @tab @item @file{sysfile.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{sysfloat.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{sysproc.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{syspwd.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{syssignal.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{systime.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{systty.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{syswait.h} @tab @ref{Modules for Interfacing with the Operating System}. 
@item @file{syswindows.h} @tab @item @file{tag-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{termcap.c} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{terminfo.c} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{test-harness.el} @tab @ref{Modules for Regression Testing}. @item @file{tests.c} @tab @item @file{text.c} @tab @item @file{text.h} @tab @item @file{toolbar-common.c} @tab @item @file{toolbar-common.h} @tab @item @file{toolbar-gtk.c} @tab @item @file{toolbar-msw.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{toolbar-x.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{toolbar.c} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{toolbar.h} @tab @ref{Modules for other Display-Related Lisp Objects}. @item @file{tooltalk.c} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{tooltalk.h} @tab @ref{Modules for Interfacing with the Operating System}. @item @file{tparam.c} @tab @ref{Modules for the Redisplay Mechanism}. @item @file{ui-byhand.c} @tab @item @file{ui-gtk.c} @tab @item @file{ui-gtk.h} @tab @item @file{undo.c} @tab @ref{Modules for Standard Editing Operations}. @item @file{unexaix.c} @tab @ref{Low-Level Modules}. @item @file{unexalpha.c} @tab @ref{Low-Level Modules}. @item @file{unexapollo.c} @tab @ref{Low-Level Modules}. @item @file{unexconvex.c} @tab @ref{Low-Level Modules}. @item @file{unexcw.c} @tab @item @file{unexec.c} @tab @ref{Low-Level Modules}. @item @file{unexelf.c} @tab @ref{Low-Level Modules}. @item @file{unexelfsgi.c} @tab @ref{Low-Level Modules}. @item @file{unexencap.c} @tab @ref{Low-Level Modules}. @item @file{unexenix.c} @tab @ref{Low-Level Modules}. @item @file{unexfreebsd.c} @tab @ref{Low-Level Modules}. @item @file{unexfx2800.c} @tab @ref{Low-Level Modules}. @item @file{unexhp9k3.c} @tab @ref{Low-Level Modules}. @item @file{unexhp9k800.c} @tab @ref{Low-Level Modules}. @item @file{unexmips.c} @tab @ref{Low-Level Modules}. @item @file{unexnext.c} @tab @ref{Low-Level Modules}. @item @file{unexnt.c} @tab @item @file{unexsni.c} @tab @item @file{unexsol2-6.c} @tab @item @file{unexsol2.c} @tab @ref{Low-Level Modules}. @item @file{unexsunos4.c} @tab @ref{Low-Level Modules}. @item @file{unicode.c} @tab @item @file{universe.h} @tab @ref{Low-Level Modules}. @item @file{vm-limit.c} @tab @ref{Low-Level Modules}. @item @file{weak-tests.el} @tab @ref{Modules for Regression Testing}. @item @file{widget.c} @tab @item @file{win32.c} @tab @item @file{window-impl.h} @tab @item @file{window.c} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{window.h} @tab @ref{Modules for the Basic Displayable Lisp Objects}. @item @file{winslots.h} @tab @item @file{xemacs.def.in.in} @tab @item @file{xgccache.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{xgccache.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{xintrinsic.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{xintrinsicp.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{xmmanagerp.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{xmotif.h} @tab @item @file{xmprimitivep.h} @tab @ref{Modules for Interfacing with X Windows}. @item @file{xmu.c} @tab @ref{Modules for Interfacing with X Windows}. @item @file{xmu.h} @tab @ref{Modules for Interfacing with X Windows}. 
@end multitable @node Low-Level Modules, Basic Lisp Modules, A Summary of the Various XEmacs Modules, The Modules of XEmacs @section Low-Level Modules @cindex low-level modules @cindex modules, low-level @example @file{config.h} @end example This is automatically generated from @file{config.h.in} based on the results of configure tests and user-selected optional features, and contains preprocessor definitions specifying the nature of the environment in which XEmacs is being compiled. @example @file{paths.h} @end example This is automatically generated from @file{paths.h.in} based on supplied configure values, and allows for non-standard installed configurations of the XEmacs directories. It's currently broken, though. @example @file{emacs.c} @file{signal.c} @end example @file{emacs.c} contains @code{main()} and other code that performs the most basic environment initializations and handles shutting down the XEmacs process (this includes @code{kill-emacs}, the normal way that XEmacs is exited; @code{dump-emacs}, which is used during the build process to write out the XEmacs executable; @code{run-emacs-from-temacs}, which can be used to start XEmacs directly when @file{temacs} has finished loading all the Lisp code; and emergency code to handle crashes [XEmacs tries to auto-save all files before it crashes]). Low-level code that directly interacts with the Unix signal mechanism, however, is in @file{signal.c}. Note that this code does not handle system dependencies in interfacing to signals; that is handled using the @file{syssignal.h} header file (@pxref{Modules for Interfacing with the Operating System}). @example @file{unexaix.c} @file{unexalpha.c} @file{unexapollo.c} @file{unexconvex.c} @file{unexec.c} @file{unexelf.c} @file{unexelfsgi.c} @file{unexencap.c} @file{unexenix.c} @file{unexfreebsd.c} @file{unexfx2800.c} @file{unexhp9k3.c} @file{unexhp9k800.c} @file{unexmips.c} @file{unexnext.c} @file{unexsol2.c} @file{unexsunos4.c} @end example These modules contain the code for dumping out the XEmacs executable on various different systems. (This process is highly machine-specific and requires intimate knowledge of the executable format and the memory map of the process.) Only one of these modules is actually used; it is chosen by @file{configure}. @example @file{ecrt0.c} @file{lastfile.c} @file{pre-crt0.c} @end example These modules are used in conjunction with the dump mechanism. On some systems, an alternative version of the C startup code (the actual code that receives control from the operating system when the process is started, and which calls @code{main()}) is required so that the dumping process works properly; @file{ecrt0.c} provides this. @file{pre-crt0.c} and @file{lastfile.c} should be the very first and very last files linked, respectively. (Actually, this is not really true. @file{lastfile.c} should be after all Emacs modules whose initialized data should be made constant, and before all other Emacs files and all libraries. In particular, the allocation modules @file{gmalloc.c}, @file{alloca.c}, etc. are normally placed past @file{lastfile.c}, and all of the files that implement Xt widget classes @emph{must} be placed after @file{lastfile.c} because they contain various structures that must be statically initialized and into which Xt writes at various times.) @file{pre-crt0.c} and @file{lastfile.c} contain exported symbols that are used to determine the start and end of XEmacs' initialized data space when dumping.
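To make the boundary-symbol trick concrete, here is a minimal illustrative sketch. The marker names below are invented for the example (the real definitions live in @file{pre-crt0.c} and @file{lastfile.c}), and a real dumper operates on the whole linked process image rather than on a single file:

@smallexample
@group
#include <stdio.h>

/* In a real build these two definitions would live in the
   first-linked and last-linked object files, so that their
   addresses bracket the initialized data of every module in
   between.  */
int data_start_marker = 1;
int some_module_data[4] = @{ 1, 2, 3, 4 @}; /* stand-in for module data */
char data_end_marker[] = "end of initialized data";

int
main (void)
@{
  /* A dumper would write everything between the two markers into
     the data section of the new executable.  */
  printf ("initialized data spans %p to %p\n",
          (void *) &data_start_marker, (void *) data_end_marker);
  return 0;
@}
@end group
@end smallexample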
@example @file{inline.c} @end example This module is used in connection with inline functions (available in some compilers). Often, inline functions need to have a corresponding non-inline function that does the same thing. This module is where they reside. It contains no actual code, but defines some special flags that cause inline functions defined in header files to be rendered as actual functions. It then includes all header files that contain any inline function definitions, so that each one gets a real function equivalent. @example @file{debug.c} @file{debug.h} @end example These modules provide a system for doing internal consistency checks during code development. This system is not currently used; instead the simpler @code{assert()} macro is used along with the various checks provided by the @samp{--error-check-*} configuration options. @example @file{universe.h} @end example This is not currently used. @node Basic Lisp Modules, Modules for Standard Editing Operations, Low-Level Modules, The Modules of XEmacs @section Basic Lisp Modules @cindex Lisp modules, basic @cindex modules, basic Lisp @example @file{lisp-disunion.h} @file{lisp-union.h} @file{lisp.h} @file{lrecord.h} @file{symsinit.h} @end example These are the basic header files for all XEmacs modules. Each module includes @file{lisp.h}, which brings the other header files in. @file{lisp.h} contains the definitions of the structures and extractor and constructor macros for the basic Lisp objects and various other basic definitions for the Lisp environment, as well as some general-purpose definitions (e.g. @code{min()} and @code{max()}). @file{lisp.h} includes either @file{lisp-disunion.h} or @file{lisp-union.h}, depending on whether @code{USE_UNION_TYPE} is defined. These files define the typedef of the Lisp object itself (as described above) and the low-level macros that hide the actual implementation of the Lisp object. All extractor and constructor macros for particular types of Lisp objects are defined in terms of these low-level macros. As a general rule, all typedefs should go into the typedefs section of @file{lisp.h} rather than into a module-specific header file even if the structure is defined elsewhere. This allows function prototypes that use the typedef to be placed into other header files. Forward structure declarations (i.e. a simple declaration like @code{struct foo;} where the structure itself is defined elsewhere) should be placed into the typedefs section as necessary. @file{lrecord.h} contains the basic structures and macros that implement all record-type Lisp objects---i.e. all objects whose type is a field in their C structure, which includes all objects except the few most basic ones. @file{lisp.h} contains prototypes for most of the exported functions in the various modules. Lisp primitives defined using @code{DEFUN} that need to be called by C code should be declared using @code{EXFUN}. Other function prototypes should be placed either into the appropriate section of @file{lisp.h} or into a module-specific header file, depending on how general-purpose the function is and whether it has special-purpose argument types requiring definitions not in @file{lisp.h}. All initialization functions are prototyped in @file{symsinit.h}. @example @file{alloc.c} @end example The large module @file{alloc.c} implements all of the basic allocation and garbage collection for Lisp objects.
The most commonly used Lisp objects are allocated in chunks, similar to the Blocktype data type described above; others are allocated in individually @code{malloc()}ed blocks. This module provides the foundation on which all other aspects of the Lisp environment sit, and is the first module initialized at startup. Note that @file{alloc.c} provides a series of generic functions that are not dependent on any particular object type, and interfaces to particular types of objects using a standardized interface of type-specific methods. This scheme is a fundamental principle of object-oriented programming and is heavily used throughout XEmacs. The great advantage of this is that it allows for a clean separation of functionality into different modules---new classes of Lisp objects, new event interfaces, new device types, new stream interfaces, etc. can be added transparently without affecting code anywhere else in XEmacs. Because the different subsystems are divided into general and specific code, adding a new subtype within a subsystem will in general not require changes to the generic subsystem code or affect any of the other subtypes in the subsystem; this provides a great deal of robustness to the XEmacs code. @example @file{eval.c} @file{backtrace.h} @end example This module contains all of the functions to handle the flow of control. This includes the mechanisms of defining functions, calling functions, traversing stack frames, and binding variables; the control primitives and other special forms such as @code{while}, @code{if}, @code{eval}, @code{let}, @code{and}, @code{or}, @code{progn}, etc.; handling of non-local exits, unwind-protects, and exception handlers; entering the debugger; methods for the subr Lisp object type; etc. It does @emph{not} include the @code{read} function, the @code{print} function, or the handling of symbols and obarrays. @file{backtrace.h} contains some structures related to stack frames and the flow of control. @example @file{lread.c} @end example This module implements the Lisp reader and the @code{read} function, which converts text into Lisp objects, according to the read syntax of the objects, as described above. This is similar to the parser that is a part of all compilers. @example @file{print.c} @end example This module implements the Lisp print mechanism and the @code{print} function and related functions. This is the inverse of the Lisp reader: it converts Lisp objects to a printed, textual representation, ideally one that can be read back in using @code{read} to get an equivalent object. @example @file{general.c} @file{symbols.c} @file{symeval.h} @end example @file{symbols.c} implements the handling of symbols, obarrays, and retrieving the values of symbols. Much of the code is devoted to handling the special @dfn{symbol-value-magic} objects that define special types of variables---this includes buffer-local variables, variable aliases, variables that forward into C variables, etc. This module is initialized extremely early (right after @file{alloc.c}), because it is here that the basic symbols @code{t} and @code{nil} are created, and those symbols are used everywhere throughout XEmacs. @file{symeval.h} contains the definitions of symbol structures and the @code{DEFVAR_LISP()} and related macros for declaring variables. @example @file{data.c} @file{floatfns.c} @file{fns.c} @end example These modules implement the methods and standard Lisp primitives for all the basic Lisp object types other than symbols (which are described above).
@file{data.c} contains all the predicates (primitives that return whether an object is of a particular type); the integer arithmetic functions; and the basic accessor and mutator primitives for the various object types. @file{fns.c} contains all the standard predicates for working with sequences (where, abstractly speaking, a sequence is an ordered set of objects, and can be represented by a list, string, vector, or bit-vector); it also contains @code{equal}, perhaps on the grounds that the bulk of the operation of @code{equal} is comparing sequences. @file{floatfns.c} contains methods and primitives for floats and floating-point arithmetic. @example @file{bytecode.c} @file{bytecode.h} @end example @file{bytecode.c} implements the byte-code interpreter and compiled-function objects, and @file{bytecode.h} contains associated structures. Note that the byte-code @emph{compiler} is written in Lisp. @node Modules for Standard Editing Operations, Modules for Interfacing with the File System, Basic Lisp Modules, The Modules of XEmacs @section Modules for Standard Editing Operations @cindex modules for standard editing operations @cindex editing operations, modules for standard @example @file{buffer.c} @file{buffer.h} @file{bufslots.h} @end example @file{buffer.c} implements the @dfn{buffer} Lisp object type. This includes functions that create and destroy buffers; retrieve buffers by name or by other properties; manipulate lists of buffers (remember that buffers are permanent objects and are stored in various ordered lists); retrieve or change buffer properties; etc. It also contains the definitions of all the built-in buffer-local variables (which can be viewed as buffer properties). It does @emph{not} contain code to manipulate buffer-local variables (that's in @file{symbols.c}, described above), nor code to manipulate the text in a buffer. @file{buffer.h} defines the structures associated with a buffer and the various macros for retrieving text from a buffer and special buffer positions (e.g. @code{point}, the default location for text insertion). It also contains macros for working with buffer positions and converting between their representations as character offsets and as byte offsets (under MULE, they are different, because characters can be multi-byte). It is one of the largest header files. @file{bufslots.h} defines the fields in the buffer structure that correspond to the built-in buffer-local variables. It is its own header file because it is included many times in @file{buffer.c}, as a way of iterating over all the built-in buffer-local variables. @example @file{insdel.c} @file{insdel.h} @end example @file{insdel.c} contains low-level functions for inserting and deleting text in a buffer, keeping track of changed regions for use by redisplay, and calling any before-change and after-change functions that may have been registered for the buffer. It also contains the actual functions that convert between byte offsets and character offsets. @file{insdel.h} contains associated headers. @example @file{marker.c} @end example This module implements the @dfn{marker} Lisp object type, which conceptually is a pointer to a text position in a buffer that moves around as text is inserted and deleted, so as to remain in the same relative position. This module doesn't actually move the markers around; that's handled in @file{insdel.c}. This module just creates them and implements the primitives for working with them. As markers are simple objects, this does not entail much.
Note that the standard arithmetic primitives (e.g. @code{+}) accept markers in place of integers and automatically substitute the value of @code{marker-position} for the marker, i.e. an integer describing the current buffer position of the marker. @example @file{extents.c} @file{extents.h} @end example This module implements the @dfn{extent} Lisp object type, which is like a marker that works over a range of text rather than a single position. Extents are also much more complex and powerful than markers and have a more efficient (and more algorithmically complex) implementation. The implementation is described in detail in comments in @file{extents.c}. The code in @file{extents.c} works closely with @file{insdel.c} so that extents are properly moved around as text is inserted and deleted. There is also code in @file{extents.c} that provides information needed by the redisplay mechanism for efficient operation. (Remember that extents can have display properties that affect [sometimes drastically, as in the @code{invisible} property] the display of the text they cover.) @example @file{editfns.c} @end example @file{editfns.c} contains the standard Lisp primitives for working with a buffer's text, and calls the low-level functions in @file{insdel.c}. It also contains primitives for working with @code{point} (the default buffer insertion location). @file{editfns.c} also contains functions for retrieving various characteristics from the external environment: the current time, the process ID of the running XEmacs process, the name of the user who ran this XEmacs process, etc. It's not clear why this code is in @file{editfns.c}. @example @file{callint.c} @file{cmds.c} @file{commands.h} @end example @cindex interactive These modules implement the basic @dfn{interactive} commands, i.e. user-callable functions. Commands, as opposed to other functions, have special ways of getting their parameters interactively (by querying the user), as opposed to having them passed in a normal function invocation. Many commands are not really meant to be called from other Lisp functions, because they modify global state in ways that are often undesirable when invoked from another Lisp function. @file{callint.c} implements the mechanism for querying the user for parameters and calling interactive commands. The bulk of this module is code that parses the interactive spec that is supplied with an interactive command. @file{cmds.c} implements the basic, most commonly used editing commands: commands to move around the current buffer and insert and delete characters. These commands are implemented using the Lisp primitives defined in @file{editfns.c}. @file{commands.h} contains associated structure definitions and prototypes. @example @file{regex.c} @file{regex.h} @file{search.c} @end example @file{search.c} implements the Lisp primitives for searching for text in a buffer, and some of the low-level algorithms for doing this. In particular, the fast fixed-string Boyer-Moore search algorithm is implemented in @file{search.c}; a sketch of the idea behind it appears below. The low-level algorithms for doing regular-expression searching, however, are implemented in @file{regex.c} and @file{regex.h}. These two modules are largely independent of XEmacs, and are similar to (and based upon) the regular-expression routines used in @file{grep} and other GNU utilities. @example @file{doprnt.c} @end example @file{doprnt.c} implements formatted-string processing, similar to the @code{printf()} function in C.
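As promised above, here is a sketch of the idea behind the Boyer-Moore family of search algorithms, in the simplified Horspool variant. This is @emph{not} the actual XEmacs implementation (the real code in @file{search.c} must also cope with buffer gaps, case-translation tables, and multi-byte text); it only shows where the speed comes from: a mismatch lets the pattern slide forward by several characters at once.

@smallexample
@group
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Horspool's simplification of Boyer-Moore, for illustration only.
   On a mismatch, the character at the end of the current window
   tells us how far the pattern can safely be shifted.  */
static const char *
bm_search (const char *text, size_t tlen, const char *pat, size_t plen)
@{
  size_t skip[256], i;

  if (plen == 0 || plen > tlen)
    return NULL;
  /* A window character that does not occur in the pattern lets us
     skip a full pattern length; others allow a smaller shift.  */
  for (i = 0; i < 256; i++)
    skip[i] = plen;
  for (i = 0; i + 1 < plen; i++)
    skip[(unsigned char) pat[i]] = plen - 1 - i;

  for (i = 0; i + plen <= tlen;
       i += skip[(unsigned char) text[i + plen - 1]])
    if (memcmp (text + i, pat, plen) == 0)
      return text + i;
  return NULL;
@}

int
main (void)
@{
  const char *text = "the quick brown fox jumps over the lazy dog";
  const char *hit = bm_search (text, strlen (text), "lazy", 4);
  printf ("found at offset %ld\n", hit ? (long) (hit - text) : -1L);
  return 0;
@}
@end group
@end smallexample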
@example @file{undo.c} @end example This module implements the undo mechanism for tracking buffer changes. Most of this could be implemented in Lisp. @node Modules for Interfacing with the File System, Modules for Other Aspects of the Lisp Interpreter and Object System, Modules for Standard Editing Operations, The Modules of XEmacs @section Modules for Interfacing with the File System @cindex modules for interfacing with the file system @cindex interfacing with the file system, modules for @cindex file system, modules for interfacing with the @example @file{lstream.c} @file{lstream.h} @end example These modules implement the @dfn{stream} Lisp object type. This is an internal-only Lisp object that implements a generic buffering stream. The idea is to provide a uniform interface onto all sources and sinks of data, including file descriptors, stdio streams, chunks of memory, Lisp buffers, Lisp strings, etc. That way, I/O functions can be written to the stream interface and can transparently handle all possible sources and sinks. (For example, the @code{read} function can read data from a file, a string, a buffer, or even a function that is called repeatedly to return data, without worrying about where the data is coming from or in what size chunks it is returned.) @cindex lstream Note that in the C code, streams are called @dfn{lstreams} (for ``Lisp streams'') to distinguish them from other kinds of streams, e.g. stdio streams and C++ I/O streams. Similar to other subsystems in XEmacs, lstreams are separated into generic functions and a set of methods for the different types of lstreams. @file{lstream.c} provides implementations of many different types of streams; others are provided, e.g., in @file{file-coding.c}. @example @file{fileio.c} @end example This implements the basic primitives for interfacing with the file system. This includes primitives for reading files into buffers, writing buffers into files, checking for the presence or accessibility of files, canonicalizing file names, etc. Note that these primitives are usually not invoked directly by the user: there is a great deal of higher-level Lisp code that implements user commands such as @code{find-file} and @code{save-buffer}. This is similar to the distinction between the lower-level primitives in @file{editfns.c} and the higher-level user commands in @file{cmds.c} and @file{simple.el}. @example @file{filelock.c} @end example This file provides functions for detecting clashes between different processes (e.g. XEmacs and some external process, or two different XEmacs processes) modifying the same file. (XEmacs can optionally use the @file{lock/} subdirectory to provide a form of ``locking'' between different XEmacs processes.) This module is also used by the low-level functions in @file{insdel.c} to ensure that, if the first modification is being made to a buffer whose corresponding file has been externally modified, the user is made aware of this so that the buffer can be synched up with the external changes if necessary. @example @file{filemode.c} @end example This file provides some miscellaneous functions that construct a @samp{rwxr-xr-x}-type permissions string (as might appear in an @file{ls}-style directory listing) given the information returned by the @code{stat()} system call. @example @file{dired.c} @file{ndir.h} @end example These files implement the XEmacs interface to directory searching. This includes a number of primitives for determining the files in a directory and for doing filename completion.
(Remember that generic completion is handled by a different mechanism, in @file{minibuf.c}.) @file{ndir.h} is a header file used for the directory-searching emulation functions provided in @file{sysdep.c} (@pxref{Modules for Interfacing with the Operating System}), for systems that don't provide any directory-searching functions. (On those systems, directories can be read directly as files, and parsed.) @example @file{realpath.c} @end example This file provides an implementation of the @code{realpath()} function for expanding symbolic links, on systems that don't implement it or have a broken implementation. @node Modules for Other Aspects of the Lisp Interpreter and Object System, Modules for Interfacing with the Operating System, Modules for Interfacing with the File System, The Modules of XEmacs @section Modules for Other Aspects of the Lisp Interpreter and Object System @cindex modules for other aspects of the Lisp interpreter and object system @cindex Lisp interpreter and object system, modules for other aspects of the @cindex interpreter and object system, modules for other aspects of the Lisp @cindex object system, modules for other aspects of the Lisp interpreter and @example @file{elhash.c} @file{elhash.h} @file{hash.c} @file{hash.h} @end example These files provide two implementations of hash tables. Files @file{hash.c} and @file{hash.h} provide a generic C implementation of hash tables which can stand independently of XEmacs. Files @file{elhash.c} and @file{elhash.h} provide a separate implementation of hash tables that can store only Lisp objects and that knows about Lispy things like garbage collection; these files implement the @dfn{hash-table} Lisp object type. @example @file{specifier.c} @file{specifier.h} @end example This module implements the @dfn{specifier} Lisp object type. This is primarily used for displayable properties, and allows for values that are specific to a particular buffer, window, frame, device, or device class, as well as a default value. This is used, for example, to control the height of the horizontal scrollbar or the appearance of the @code{default}, @code{bold}, or other faces. The specifier object consists of a number of specifications, each of which maps from a buffer, window, etc. to a value. The function @code{specifier-instance} looks up a value given a window (from which a buffer, frame, and device can be derived). @example @file{chartab.c} @file{chartab.h} @file{casetab.c} @end example @file{chartab.c} and @file{chartab.h} implement the @dfn{char table} Lisp object type, which maps from characters or certain sorts of character ranges to Lisp objects. The implementation of this object type is optimized for the internal representation of characters. Char tables come in different types, which affect the allowed object types to which a character can be mapped and also dictate certain other properties of the char table. @cindex case table @file{casetab.c} implements one sort of char table, the @dfn{case table}, which maps characters to other characters of possibly different case. These are used by XEmacs to implement case-changing primitives and to do case-insensitive searching. @example @file{syntax.c} @file{syntax.h} @end example @cindex scanner This module implements @dfn{syntax tables}, another sort of char table that maps characters into syntax classes that define the syntax of these characters (e.g. a parenthesis belongs to a class of @samp{open} characters that have corresponding @samp{close} characters and can be nested).
This module also implements the Lisp @dfn{scanner}, a set of primitives for scanning over text based on syntax tables. This is used, for example, to find the matching parenthesis in a command such as @code{forward-sexp}, and by @file{font-lock.c} to locate quoted strings, comments, etc. @c #### Break this out into a separate node somewhere! Syntax codes are implemented as bitfields in an int. Bits 0-6 contain the syntax code itself, bit 7 is a special prefix flag used for Lisp, and bits 16-23 contain comment syntax flags. From the Lisp programmer's point of view, there are 11 flags: 2 styles X 2 characters X @{start, end@} flags for two-character comment delimiters, 2 style flags for one-character comment delimiters, and the prefix flag. Internally, however, the characters used in multi-character delimiters will have non-comment-character syntax classes (@emph{e.g.}, the @samp{/} in C's @samp{/*} comment-start delimiter has ``punctuation'' (here meaning ``operator-like'') class in C modes). Thus a mixed comment style, such as C++'s @samp{//} to end of line, is represented by giving @samp{/} the ``punctuation'' class and the ``style b first character of start sequence'' and ``style b second character of start sequence'' flags. The fact that the class is punctuation, @emph{not} a comment class, allows the syntax scanner to recognize that this is a multi-character delimiter. The @samp{newline} character is given (single-character) ``comment-end'' @emph{class} and the ``style b first character of end sequence'' @emph{flag}. The ``comment-end'' class allows the scanner to determine that no second character is needed to terminate the comment. There used to be a syntax class @samp{Sextword}. A character of @samp{Sextword} class is a word-constituent but a word boundary may exist between two such characters. Ken'ichi HANDA <handa@@etl.go.jp> explains the purpose of the Sextword syntax category: @quotation Japanese words are not separated by spaces, which makes finding word boundaries very difficult. Theoretically it's impossible without using natural language processing techniques. But, by defining pseudo-words as below (much simplified for letting you understand it easily) for Japanese, we can have a convenient forward-word function for Japanese. @display A Japanese word is a sequence of characters that consists of zero or more Kanji characters followed by zero or more Hiragana characters. @end display Then, the problem is that now we can't say that a sequence of word-constituents makes up a word. For instance, both Hiragana ``A'' and Kanji ``KAN'' are word-constituents but the sequence of these two letters can't be a single word. So, we introduced Sextword for Japanese letters. @end quotation There seems to have been some controversy about this category, as it has been removed, readded, and removed again. Currently neither GNU Emacs (21.3.99) nor XEmacs (21.5.17) seems to use it. @example @file{casefiddle.c} @end example This module implements various Lisp primitives for upcasing, downcasing, and capitalizing strings or regions of buffers. @example @file{rangetab.c} @end example This module implements the @dfn{range table} Lisp object type, which provides for a mapping from ranges of integers to arbitrary Lisp objects. @example @file{opaque.c} @file{opaque.h} @end example This module implements the @dfn{opaque} Lisp object type, an internal-only Lisp object that encapsulates an arbitrary block of memory so that it can be managed by the Lisp allocation system.
To create an opaque object, you call @code{make_opaque()}, passing a pointer to a block of memory. An object is created that is big enough to hold the memory, which is copied into the object's storage. The object will then stick around as long as you keep pointers to it, after which it will be automatically reclaimed. @cindex mark method Opaque objects can also have an arbitrary @dfn{mark method} associated with them, in case the block of memory contains other Lisp objects that need to be marked for garbage-collection purposes. (If you need other object methods, such as a finalize method, you should just go ahead and create a new Lisp object type---it's not hard.) @example @file{abbrev.c} @end example This module provides a few primitives for doing abbreviation expansion. In XEmacs, most of the code for this has been moved into Lisp. Some C code remains for speed and because the primitive @code{self-insert-command} (which is executed for all self-inserting characters) hooks into the abbrev mechanism. (@code{self-insert-command} is itself in C only for speed.) @example @file{doc.c} @end example This module provides primitives for retrieving the documentation strings of functions and variables. These documentation strings contain certain special markers that get dynamically expanded (e.g. a reverse-lookup is performed on some named functions to retrieve their current key bindings). Some documentation strings (in particular, for the built-in primitives and pre-loaded Lisp functions) are stored externally in a file @file{DOC} in the @file{lib-src/} directory and need to be fetched from that file. (Part of the build stage involves building this file, and another part involves constructing an index for this file and embedding it into the executable, so that the functions in @file{doc.c} do not have to search the entire @file{DOC} file to find the appropriate documentation string.) @example @file{md5.c} @end example This module provides a Lisp primitive that implements the MD5 secure hashing scheme, used to create a large hash value of a string of data such that the data cannot be derived from the hash value. This is used for various security applications on the Internet. @node Modules for Interfacing with the Operating System, , Modules for Other Aspects of the Lisp Interpreter and Object System, The Modules of XEmacs @section Modules for Interfacing with the Operating System @cindex modules for interfacing with the operating system @cindex interfacing with the operating system, modules for @cindex operating system, modules for interfacing with the @example @file{process.el} @file{process.c} @file{process.h} @end example These modules allow XEmacs to spawn and communicate with subprocesses and network connections. @cindex synchronous subprocesses @cindex subprocesses, synchronous @file{process.el} implements (through the @code{call-process} primitive) what are called @dfn{synchronous subprocesses}. This means that XEmacs runs a program, waits till it's done, and retrieves its output. A typical example might be calling the @file{ls} program to get a directory listing. @cindex asynchronous subprocesses @cindex subprocesses, asynchronous @file{process.c} and @file{process.h} implement @dfn{asynchronous subprocesses}. This means that XEmacs starts a program and then continues normally, not waiting for the process to finish. Data can be sent to the process or retrieved from it as it's running.
This is used for the @code{shell} command (which provides a front end onto a shell program such as @file{csh}), the mail and news readers implemented in XEmacs, etc. The result of calling @code{start-process} to start a subprocess is a process object, a particular kind of object used to communicate with the subprocess. You can send data to the process by passing the process object and the data to @code{send-process}, and you can specify what happens to data retrieved from the process by setting properties of the process object. (When the process sends data, XEmacs receives a process event, which says that there is data ready. When @code{dispatch-event} is called on this event, it reads the data from the process and does something with it, as specified by the process object's properties. Typically, this means inserting the data into a buffer or calling a function.) Another property of the process object is called the @dfn{sentinel}, which is a function that is called when the process terminates. @cindex network connections Process objects are also used for network connections (connections to a process running on another machine). Network connections are started with @code{open-network-stream} but otherwise work just like subprocesses. @example @file{sysdep.c} @file{sysdep.h} @end example These modules implement most of the low-level, messy operating-system interface code. This includes various device control (ioctl) operations for file descriptors, TTY's, pseudo-terminals, etc. (usually this stuff is fairly system-dependent; thus the name of this module), and emulation of standard library functions and system calls on systems that don't provide them or have broken versions. @example @file{sysdir.h} @file{sysfile.h} @file{sysfloat.h} @file{sysproc.h} @file{syspwd.h} @file{syssignal.h} @file{systime.h} @file{systty.h} @file{syswait.h} @end example These header files provide consistent interfaces onto system-dependent header files and system calls. The idea is that, instead of including a standard header file like @file{<sys/param.h>} (which may or may not exist on various systems) or having to worry about whether all systems provide a particular preprocessor constant, or having to deal with the four different paradigms for manipulating signals, you just include the appropriate @file{sys*.h} header file, which includes all the right system header files, defines any missing preprocessor constants, provides a uniform interface onto system calls, etc. @file{sysdir.h} provides a uniform interface onto directory-querying functions. (In some cases, this is in conjunction with emulation functions in @file{sysdep.c}.) @file{sysfile.h} includes all the necessary header files for standard system calls (e.g. @code{read()}), ensures that all necessary @code{open()} and @code{stat()} preprocessor constants are defined, and possibly (usually) substitutes sugared versions of @code{read()}, @code{write()}, etc. that automatically restart interrupted I/O operations. @file{sysfloat.h} includes the necessary header files for floating-point operations. @file{sysproc.h} includes the necessary header files for calling @code{select()}, @code{fork()}, @code{execve()}, socket operations, and the like, and ensures that the @code{FD_*()} macros for descriptor-set manipulations are available. @file{syspwd.h} includes the necessary header files for obtaining information from @file{/etc/passwd} (the functions are emulated under VMS).
@file{syssignal.h} includes the necessary header files for signal-handling and provides a uniform interface onto the different signal-handling and signal-blocking paradigms. @file{systime.h} includes the necessary header files and provides uniform interfaces for retrieving the time of day, setting file access/modification times, getting the amount of time used by the XEmacs process, etc. @file{systty.h} buffers against the infinitude of different ways of controlling TTY's. @file{syswait.h} provides a uniform way of retrieving the exit status from a @code{wait()}ed-on process (some systems use a union, others use an int); a sketch of this kind of smoothing-over appears at the end of this section. @example @file{hpplay.c} @file{libsst.c} @file{libsst.h} @file{libst.h} @file{linuxplay.c} @file{nas.c} @file{sgiplay.c} @file{sound.c} @file{sunplay.c} @end example These files implement the ability to play various sounds on some types of computers. You have to configure your XEmacs with sound support in order to get this capability. @file{sound.c} provides the generic interface. It implements various Lisp primitives and variables that let you specify which sounds should be played in certain conditions. (The conditions are identified by symbols, which are passed to @code{ding} to make a sound. Various standard functions call this function at certain times; if sound support does not exist, a simple beep results.) @cindex native sound @cindex sound, native @file{sgiplay.c}, @file{sunplay.c}, @file{hpplay.c}, and @file{linuxplay.c} interface to the machine's speaker for various kinds of machines. This is called @dfn{native} sound. @cindex sound, network @cindex network sound @cindex NAS @file{nas.c} interfaces to a computer somewhere else on the network using the NAS (Network Audio Server) protocol, playing sounds on that machine. This allows you to run XEmacs on a remote machine, with its display set to your local machine, and have the sounds be made on your local machine, provided that you have a NAS server running on your local machine. @file{libsst.c}, @file{libsst.h}, and @file{libst.h} provide some additional functions for playing sound on a Sun SPARC but are not currently in use. @example @file{tooltalk.c} @file{tooltalk.h} @end example These two modules implement an interface to the ToolTalk protocol, which is an interprocess communication protocol implemented on some versions of Unix. ToolTalk is a high-level protocol that allows processes to register themselves as providers of particular services; other processes can then request a service without knowing or caring exactly who is providing the service. It is similar in spirit to the DDE protocol provided under Microsoft Windows. ToolTalk is a part of the new CDE (Common Desktop Environment) specification and is used to connect the parts of the SPARCWorks development environment. @example @file{getloadavg.c} @end example This module provides the ability to retrieve the system's current load average. (The way to do this is highly system-specific, unfortunately, and requires a lot of special-case code.) @example @file{sunpro.c} @end example This module provides a small amount of code used internally at Sun to keep statistics on the usage of XEmacs. @example @file{broken-sun.h} @file{strcmp.c} @file{strcpy.c} @file{sunOS-fix.c} @end example These files provide replacement functions and prototypes to fix numerous bugs in early releases of SunOS 4.1. @example @file{hftctl.c} @end example This module provides some terminal-control code necessary on versions of AIX prior to 4.1.
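To illustrate the sort of portability wrinkle these headers smooth over, here is the exit-status example mentioned under @file{syswait.h} above. The fallback definitions below are an illustrative sketch, not the real contents of @file{syswait.h}; they assume the traditional layout of the status word, which is exactly the kind of assumption such a header localizes in one place:

@smallexample
@group
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fallbacks of the kind a syswait.h-like header must supply on
   systems lacking the standard macros (sketch only).  */
#ifndef WIFEXITED
#define WIFEXITED(status) (((status) & 0xff) == 0)
#endif
#ifndef WEXITSTATUS
#define WEXITSTATUS(status) (((status) >> 8) & 0xff)
#endif

int
main (void)
@{
  pid_t pid = fork ();
  if (pid == 0)
    _exit (42);                 /* child */
  if (pid > 0)
    @{
      int status;
      waitpid (pid, &status, 0);
      if (WIFEXITED (status))
        printf ("child exited with code %d\n", WEXITSTATUS (status));
    @}
  return 0;
@}
@end group
@end smallexample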
@node Rules When Writing New C Code, Regression Testing XEmacs, The Modules of XEmacs, Top @chapter Rules When Writing New C Code @cindex writing new C code, rules when @cindex C code, rules when writing new @cindex code, rules when writing new C The XEmacs C code is extremely complex and intricate, and there are many rules that are more or less consistently followed throughout the code. Many of these rules are not obvious, so they are explained here. It is of the utmost importance that you follow them. If you don't, you may get something that appears to work, but which will crash in odd situations, often in code far away from where the actual breakage is. @menu * Introduction to Writing C Code:: * Writing New Modules:: * Working with Lisp Objects:: * Writing Lisp Primitives:: * Writing Good Comments:: * Adding Global Lisp Variables:: * Writing Macros:: * Proper Use of Unsigned Types:: * Major Textual Changes:: * Debugging and Testing:: @end menu See also @ref{Coding for Mule}. @node Introduction to Writing C Code, Writing New Modules, Rules When Writing New C Code, Rules When Writing New C Code @section Introduction to Writing C Code @cindex introduction to writing c code @cindex coding conventions The C code is actually written in a dialect of C called @dfn{Clean C}, meaning that it can be compiled, mostly warning-free, with either a C or C++ compiler. Coding in Clean C has several advantages over plain C. C++ compilers are more nit-picking, and a number of coding errors have been found by compiling with C++. The ability to use both C and C++ tools means that a greater variety of development tools are available to the developer. In addition, the ability to overload operators in C++ means it is possible, for error-checking purposes, to redefine certain simple types (normally defined as aliases for simple built-in types such as @code{unsigned char} or @code{long}) as classes, strictly limiting the permissible operations and catching illegal implicit casts and such. XEmacs follows the GNU coding standards, which are documented separately (@pxref{Top,,, standards, GNU Coding Standards}). This section mainly documents standards that are not included in that document; typically this consists of standards that are specifically relevant to the XEmacs code itself. First, a recap of the GNU standards: @itemize @bullet @item Put a space after every comma. @item Put a space before the parenthesis that begins a function call, macro call, function declaration or definition, or control statement (if, while, switch, for). (Do @emph{not} do this for macro definitions; this is invalid preprocessor syntax.) @item The brace that begins a control statement (if, while, for, switch, do) or a function definition should go on a line by itself. @item In function definitions, put the return type and all other qualifiers on a line before the function name. Thus, the function name is always at the beginning of a line. @item Indentation level is two spaces. (However, the first and following statements of a while/for/if/etc. block are indented four spaces from the while/for/if keyword. The opening and closing braces are indented two spaces.) @item Variable and function names should be all lowercase, with underscores separating words, except for a prefixing tag, which may be in uppercase. Do not use the mixed-case convention (e.g. @code{SetVariableToValue ()}) and @emph{especially} do not use Microsoft Hungarian notation (@code{char **rgszRedundantTag}).
@item Preprocessor and enum constants should be all uppercase, and should be prefixed with a tag that groups related constants together. @end itemize Now, the XEmacs coding standards: @subheading Specially-prefixed functions/variables: @itemize @bullet @item All global C variables whose value is constant and is a symbol begin with a capital Q, e.g. @code{Qkey_press_event}. (The type will always be @code{Lisp_Object}.) @item All other global C variables whose value is a @code{Lisp_Object} (this includes variables that forward into Lisp variables plus others like @code{Vselected_console}) begin with a capital V. @item No C variables whose value is other than a @code{Lisp_Object} should begin with a capital V. (This includes C variables that forward into integer or boolean Lisp variables.) @item All global C variables whose value is a @code{struct Lisp_Subr} begin with a capital S. (This only occurs in connection with @code{DEFUN ()}.) @item All C functions that are Lisp primitives begin with a capital F, and no others should begin this way. @end itemize @subheading Functions for manipulating Lisp types: @itemize @bullet @item Any function that creates an empty or mostly empty Lisp object should begin with @code{allocate_()}. (@emph{Not} @code{make_()}.) (Except, of course, for Lisp primitives, which usually begin with @code{Fmake_()}.) @item Any function that converts a pointer into an equivalent @code{Lisp_Object} should begin with @code{make_()}. @item Any function that converts a @code{Lisp_Object} into its equivalent pointer and checks the type and validity of the object (e.g. making sure it's not dead) should begin with @code{decode_()}. @item Any function that looks up a Lisp object (e.g. buffer, face) given a symbol or string should begin with @code{get_()}. (Except, of course, for Lisp primitives, which usually begin with @code{Fget_()}.) @end itemize @subheading Other: @itemize @bullet @item Any header-file declarations of the sort @code{struct foobar;} go into the ``types'' section of @file{lisp.h}. @end itemize @node Writing New Modules, Working with Lisp Objects, Introduction to Writing C Code, Rules When Writing New C Code @section Writing New Modules @cindex writing new modules Every module includes @file{<config.h>} (angle brackets so that @samp{--srcdir} works correctly; @file{config.h} may or may not be in the same directory as the C sources) and @file{lisp.h}. @file{config.h} must always be included before any other header files (including system header files) to ensure that certain tricks played by various @file{s/} and @file{m/} files work out correctly. When including header files, always use angle brackets, not double quotes, except when the file to be included is always in the same directory as the including file. If either file is a generated file, then that is not likely to be the case. In order to understand why we have this rule, imagine what happens when you do a build in the source directory using @samp{./configure} and another build in another directory using @samp{../work/configure}. There will be two different @file{config.h} files. Which one will be used if you @samp{#include "config.h"}? Almost every module contains a @code{syms_of_*()} function and a @code{vars_of_*()} function. The former declares any Lisp primitives you have defined and defines any symbols you will be using. The latter declares any global Lisp variables you have added and initializes global C variables in the module. @strong{Important}: There are stringent requirements on exactly what can go into these functions. See the comment in @file{emacs.c}. The reason for this is to avoid obscure unwanted interactions during initialization.
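To make this concrete, here is a rough skeleton of a hypothetical module @file{foo.c}. All the names are invented for illustration, and the exact macros and their required ordering should be checked against a real module and the comment in @file{emacs.c}:

@smallexample
@group
/* Sketch of a hypothetical module foo.c -- illustration only.  */

Lisp_Object Qfoo_feature;  /* constant symbol; note the Q prefix */
Lisp_Object Vfoo_list;     /* Lisp-visible variable; note the V prefix */

DEFUN ("foo-identity", Ffoo_identity, 1, 1, 0, /*
Return OBJECT, unchanged.
*/
       (object))
@{
  return object;
@}

void
syms_of_foo (void)
@{
  DEFSUBR (Ffoo_identity);   /* register the Lisp primitive */
  DEFSYMBOL (Qfoo_feature);  /* define the symbol `foo-feature' */
@}

void
vars_of_foo (void)
@{
  DEFVAR_LISP ("foo-list", &Vfoo_list /*
List used by the hypothetical foo facility.
*/ );
  Vfoo_list = Qnil;  /* initialize here, not in the declaration */
@}
@end group
@end smallexample

Note that @code{Vfoo_list} is initialized inside @code{vars_of_foo()} rather than with a static initializer; the reason for this rule is explained below.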
If you don't follow these rules, you'll be sorry! If you want to do anything that isn't allowed, create a @code{complex_vars_of_*()} function for it. Doing this is tricky, though: you have to make sure your function is called at the right time so that all the initialization dependencies work out. Declare each function of these kinds in @file{symsinit.h}. Make sure it's called in the appropriate place in @file{emacs.c}. You never need to include @file{symsinit.h} directly, because it is included by @file{lisp.h}. @strong{All global and static variables that are to be modifiable must be declared uninitialized.} This means that you may not use the ``declare with initializer'' form for these variables, such as @code{int some_variable = 0;}. The reason for this has to do with some kludges done during the dumping process: if possible, the initialized data segment is re-mapped so that it becomes part of the (unmodifiable) code segment in the dumped executable. This allows this memory to be shared among multiple running XEmacs processes. XEmacs is careful to place as much constant data as possible into initialized variables during the @file{temacs} phase. @cindex copy-on-write @strong{Please note:} This kludge only works on a few systems nowadays, and is rapidly becoming irrelevant because most modern operating systems provide @dfn{copy-on-write} semantics. All data is initially shared between processes, and a private copy is automatically made (on a page-by-page basis) when a process first attempts to write to a page of memory. Formerly, there was a requirement that static variables not be declared inside of functions. This had to do with another hack along the same vein as what was just described: old USG systems put statically-declared variables in the initialized data space, so the header files for those systems had a @code{#define static} definition. (That way, the data-segment remapping described above could still work.) This fails badly on static variables inside of functions, which suddenly become automatic variables; therefore, you weren't supposed to have any of them. This awful kludge has been removed in XEmacs because @enumerate @item almost all of the systems that used this kludge ended up having to disable the data-segment remapping anyway; @item the only systems that didn't were extremely outdated ones; @item this hack completely messed up inline functions. @end enumerate Here are things to know when you create a new source file: @itemize @bullet @item All @file{.c} files should @code{#include <config.h>} first. Almost all @file{.c} files should @code{#include "lisp.h"} second. @item Generated header files should be included using the @samp{#include <...>} syntax, not the @samp{#include "..."} syntax. The generated headers are @file{config.h}, @file{sheap-adjust.h}, @file{paths.h}, and @file{Emacs.ad.h}. The basic rule is that you should assume builds may use @samp{--srcdir}, and so the @samp{#include <...>} syntax needs to be used when the to-be-included generated file is in a potentially different directory @emph{at compile time}. The non-obvious C rule is that @samp{#include "..."} means to search for the included file in the same directory as the including file, @emph{not} in the current directory. Normally this is not a problem, but when building with @samp{--srcdir}, @file{make} will search the @samp{VPATH} for you, while the C compiler knows nothing about it. @item Header files should @emph{not} include @samp{<config.h>} and @samp{"lisp.h"}. It is the responsibility of the @file{.c} files that use them to do so.
@end itemize @node Working with Lisp Objects, Writing Lisp Primitives, Writing New Modules, Rules When Writing New C Code @section Working with Lisp Objects @cindex working with lisp objects @subheading Conventions involving Lisp objects Of course the low-level implementation language of XEmacs is C, but much of that uses the Lisp engine to do its work. However, because the code is ``inside'' of the protective containment shell around the ``reactor core,'' you'll see lots of complex ``plumbing'' needed to do the work and ``safety mechanisms'' whose failure results in a meltdown. This section provides a quick overview (or review) of the various components of the implementation of Lisp objects. Two typographic conventions help to identify C objects that implement Lisp objects. The first is that capitalized identifiers are used to implement Lisp: C variables and functions whose names begin with the letters @samp{Q}, @samp{V}, @samp{F}, and @samp{S}, and C macros whose names begin with the letter @samp{X}. The second is that where Lisp uses the hyphen @samp{-} in symbol names, the corresponding C identifiers use the underscore @samp{_}. Of course, since XEmacs Lisp contains interfaces to many external libraries, those external names will follow the coding conventions their authors chose, and may overlap the ``XEmacs name space.'' However, these cases are usually pretty obvious. All Lisp objects are handled indirectly. The @code{Lisp_Object} type is usually a pointer to a structure, except for a very small number of types with immediate representations (currently characters and integers). However, these types cannot be directly operated on in C code, either, so they can also be considered indirect. Types that do not have an immediate representation always have a C typedef @code{Lisp_@var{type}} for a corresponding structure. @c #### mention l(c)records here? In older code, it was common practice to pass around pointers to @code{Lisp_@var{type}}, but this is now deprecated in favor of using @code{Lisp_Object} for all function arguments and return values that are Lisp objects. The @code{X@var{type}} macro is used to extract the pointer and cast it to @code{(Lisp_@var{type} *)} for the desired type. @strong{Convention}: Macros whose names begin with @samp{X} operate on @code{Lisp_Object}s and do no type-checking. Many such macros are type extractors, but others implement Lisp operations in C (@emph{e.g.}, @code{XCAR} implements the Lisp @code{car} function). These are unsafe, and must only be used where the types of all data have already been checked. Such macros are only applied to @code{Lisp_Object}s. In internal implementations where the pointer has already been converted, the structure is operated on directly using the C @code{->} member access operator. The @code{@var{type}P}, @code{CHECK_@var{type}}, and @code{CONCHECK_@var{type}} macros are used to test types. The first returns a Boolean value, and the latter two signal errors. (The @samp{CONCHECK} variety allows execution to be CONtinued under some circumstances, thus the name.) Functions which expect to be passed user data invariably call @samp{CHECK} macros on arguments. There are many types of specialized Lisp objects implemented in C, but the most pervasive type is the @dfn{symbol}. Symbols are used as identifiers, variables, and functions. @strong{Convention}: Global variables whose names begin with @samp{Q} are constants whose value is a symbol.
The name of the variable should be derived from the name of the symbol using the same rules as for Lisp primitives. Such variables allow the C code to check whether a particular @code{Lisp_Object} is equal to a given symbol. Symbols are Lisp objects, so these variables may be passed to Lisp primitives. (An alternative to the use of @samp{Q...} variables is to call the @code{intern} function at initialization in the @code{vars_of_@var{module}} function, which is hardly less efficient.) @strong{Convention}: Global variables whose names begin with @samp{V} are variables that contain Lisp objects. The convention here is that all global variables of type @code{Lisp_Object} begin with @samp{V}, and no others do (not even integer and boolean variables that have Lisp equivalents). Most of the time, these variables have equivalents in Lisp, which are defined via the @samp{DEFVAR} family of macros, but some don't. Since the variable's value is a @code{Lisp_Object}, it can be passed to Lisp primitives. The implementation of Lisp primitives is more complex. @strong{Convention}: Global variables with names beginning with @samp{S} contain a structure that allows the Lisp engine to identify and call a C function. In modern versions of XEmacs, these identifiers are almost always completely hidden in the @code{DEFUN} and @code{SUBR} macros, but you will encounter them if you look at very old versions of XEmacs or at GNU Emacs. @strong{Convention}: Functions with names beginning with @samp{F} implement Lisp primitives. Of course all their arguments and their return values must be @code{Lisp_Object}s. (This is hidden in the @code{DEFUN} macro.) @subheading Working with Lisp lists Lisp lists are popular data structures in the C code as well as in Elisp. There are two sets of macros that iterate over lists. @code{EXTERNAL_LIST_LOOP_@var{n}} should be used when the list has been supplied by the user, and cannot be trusted to be acyclic and @code{nil}-terminated. A @code{malformed-list} or @code{circular-list} error will be generated if the list being iterated over is not entirely kosher. @code{LIST_LOOP_@var{n}}, on the other hand, is faster and less safe, and can be used only on trusted lists. Related macros are @code{GET_EXTERNAL_LIST_LENGTH} and @code{GET_LIST_LENGTH}, which calculate the length of a list and, in the case of @code{GET_EXTERNAL_LIST_LENGTH}, validate that the list is a proper list. The macros @code{EXTERNAL_LIST_LOOP_DELETE_IF} and @code{LIST_LOOP_DELETE_IF} delete the elements of a Lisp list that satisfy some predicate. @subheading Implementation of Lisp objects At the lowest levels, XEmacs makes heavy use of object-oriented techniques to promote code-sharing and uniform interfaces for different devices and platforms. Commonly, but not always, such objects are ``wrapped'' and exported to Lisp as Lisp objects. Usually they use the internal structures developed for Lisp objects (the @samp{lrecord} structure) in order to take advantage of Lisp memory management. Unfortunately, XEmacs was originally written in C, so these techniques are based on heavy use of C macros. @c You can't use @var{} for type below, because case is important. A module defining a class is likely to use most of the following declarations and macros. In the following, the notation @samp{<type>} will stand for the full name of the class, and will be capitalized in the way normal for its context.
@subheading Implementation of Lisp objects

At the lowest levels, XEmacs makes heavy use of object-oriented techniques to promote code-sharing and uniform interfaces for different devices and platforms. Commonly, but not always, such objects are ``wrapped'' and exported to Lisp as Lisp objects. Usually they use the internal structures developed for Lisp objects (the @samp{lrecord} structure) in order to take advantage of Lisp memory management. Unfortunately, XEmacs was originally written in C, so these techniques are based on heavy use of C macros.

@c You can't use @var{} for type below, because case is important.
A module defining a class is likely to use most of the following declarations and macros. In the following, the notation @samp{<type>} will stand for the full name of the class, and will be capitalized in the way normal for its context. The notation @samp{<typ>} will stand for the abbreviated form commonly used in macro names, while @samp{ty} will be used as the typical name for instances of the class. (See the entry for @samp{MAYBE_<TY>METH} below for an example using all three notations.)

In the interface (@file{.h} file), the following declarations are used often. Others may be used in particular modules. Since they're quite short in most cases, the definitions are given as well. The generic macros used are defined in @file{lisp.h} or @file{lrecord.h}.

@c #### reorganize this table into stuff used in general code, and stuff
@c used only in declarations or initializations
@table @samp
@c #### declaration
@item typedef struct Lisp_<Type> Lisp_<Type>
This refers to the internal structure used by C code. The XEmacs coding style now forbids passing pointers to @samp{Lisp_<Type>} structures into or out of a function; instead, a @samp{Lisp_Object} should be passed or returned (created using @samp{wrap_<type>}, if necessary).

@c #### declaration
@item DECLARE_LRECORD (<type>, Lisp_<Type>)
Declares an @samp{lrecord} for @samp{<Type>}, which is the unit of allocation.

@item #define X<TYPE>(x) XRECORD (x, <type>, Lisp_<Type>)
Turns a @code{Lisp_Object} into a pointer to @samp{struct Lisp_<Type>}.

@item #define wrap_<type>(p) wrap_record (p, <type>)
Turns a pointer to @samp{struct Lisp_<Type>} into a @code{Lisp_Object}.

@item #define <TYPE>P(x) RECORDP (x, <type>)
Tests whether a given @code{Lisp_Object} is of type @samp{Lisp_<Type>}. Returns a C int, not a Lisp Boolean value.

@item #define CHECK_<TYPE>(x) CHECK_RECORD (x, <type>)
@itemx #define CONCHECK_<TYPE>(x) CONCHECK_RECORD (x, <type>)
Tests whether a given @code{Lisp_Object} is of type @samp{Lisp_<Type>}, and signals a Lisp error if not. The @samp{CHECK} version of the macro never returns if the type is wrong, while the @samp{CONCHECK} version can return if the user catches the error in the debugger and explicitly requests a return.

@item #define RAW_<TYP>METH(ty, m) ((ty)->methods->m##_method)
Returns the function pointer for the method @var{m} of an object @var{ty} of class @samp{Lisp_<Type>}, or @samp{NULL} if there is none for this type.

@item #define HAS_<TYP>METH_P(ty, m) (!!RAW_<TYP>METH (ty, m))
Tests whether the class that @var{ty} is an instance of has the method @var{m}.

@item #define <TYP>METH(ty, m, args) ((RAW_<TYP>METH (ty, m)) args)
Calls the method @var{m} on @samp{args}. @samp{args} must be enclosed in parentheses in the call. It is the programmer's responsibility to ensure that the method is available. The standard convenience macro @samp{MAYBE_<TYP>METH} is often provided for the common case where a void-returning method of @samp{<Type>} is called.

@item #define MAYBE_<TYP>METH(ty, m, args) do @{ ... @} while (0)
Calls a void-returning @samp{<Type>} method, if it exists. Note the use of the @samp{do ... while (0)} idiom to give the macro call C statement semantics. The full definition is equally idiomatic:

@example
#define MAYBE_<TYP>METH(ty, m, args) do @{       \
  Lisp_<Type> *maybe_<typ>meth_ty = (ty);       \
  if (HAS_<TYP>METH_P (maybe_<typ>meth_ty, m))  \
    <TYP>METH (maybe_<typ>meth_ty, m, args);    \
@} while (0)
@end example
@end table

The use of macros for invoking an object's methods makes life a bit difficult for the student or maintainer when browsing the code. In particular, calls are of the form @samp{<TYP>METH (ty, some_method, (x, y))}, but definitions typically are for @samp{<subtype>_some_method}.
Thus, when you are trying to find calls, you need to grep for @samp{some_method}, but this will also catch calls and definitions of that method for instances of other subtypes of @samp{<Type>}, and there may be a rather large number of them.

@cindex Lisp object types, creating
@cindex creating Lisp object types
@cindex object types, creating Lisp

Here is a checklist of things to do when creating a new Lisp object type named @var{foo} (a sketch of the interface file follows the list):

@enumerate
@item
create @file{@var{foo}.h}
@item
create @file{@var{foo}.c}
@item
add definitions of @code{syms_of_@var{foo}}, etc. to @file{@var{foo}.c}
@item
add declarations of @code{syms_of_@var{foo}}, etc. to @file{symsinit.h}
@item
add calls to @code{syms_of_@var{foo}}, etc. to @file{emacs.c}
@item
add definitions of macros like @code{CHECK_@var{FOO}} and @code{@var{FOO}P} to @file{@var{foo}.h}
@item
add the new type index to @code{enum lrecord_type}
@item
add a @code{DEFINE_LRECORD_IMPLEMENTATION} call to @file{@var{foo}.c}
@item
add an @code{INIT_LRECORD_IMPLEMENTATION} call to @code{syms_of_@var{foo}} in @file{@var{foo}.c}
@end enumerate
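For instance, the interface file for a hypothetical object @code{foo} (items 1 and 6 of the checklist) might begin as follows. This is only a sketch assembled from the macros documented above, not code from the XEmacs sources:

@example
/* foo.h -- declarations for a hypothetical `foo' Lisp object */
typedef struct Lisp_Foo Lisp_Foo;
DECLARE_LRECORD (foo, Lisp_Foo);

#define XFOO(x) XRECORD (x, foo, Lisp_Foo)
#define wrap_foo(p) wrap_record (p, foo)
#define FOOP(x) RECORDP (x, foo)
#define CHECK_FOO(x) CHECK_RECORD (x, foo)
#define CONCHECK_FOO(x) CONCHECK_RECORD (x, foo)
@end example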
@node Writing Lisp Primitives, Writing Good Comments, Working with Lisp Objects, Rules When Writing New C Code
@section Writing Lisp Primitives
@cindex writing Lisp primitives
@cindex Lisp primitives, writing
@cindex primitives, writing Lisp

Lisp primitives are Lisp functions implemented in C. The details of interfacing the C function so that Lisp can call it are handled by a few C macros. The only way to really understand how to write new C code is to read the source, but we can explain some things here.

An example of a special form is the definition of @code{prog1}, from @file{eval.c}. (An ordinary function would have the same general appearance.)

@cindex garbage collection protection
@smallexample
@group
DEFUN ("prog1", Fprog1, 1, UNEVALLED, 0, /*
Similar to `progn', but the value of the first form is returned.
\(prog1 FIRST BODY...): All the arguments are evaluated sequentially.
The value of FIRST is saved during evaluation of the remaining args,
whose values are discarded.
*/
       (args))
@{
  /* This function can GC */
  REGISTER Lisp_Object val, form, tail;
  struct gcpro gcpro1;

  val = Feval (XCAR (args));
  GCPRO1 (val);

  LIST_LOOP_3 (form, XCDR (args), tail)
    Feval (form);

  UNGCPRO;
  return val;
@}
@end group
@end smallexample

Let's start with a precise explanation of the arguments to the @code{DEFUN} macro. Here is a template for them:

@example
@group
DEFUN (@var{lname}, @var{fname}, @var{min_args}, @var{max_args},
       @var{interactive}, /*
@var{docstring}
*/
       (@var{arglist}))
@end group
@end example

@table @var
@item lname
This string is the name of the Lisp symbol to define as the function name; in the example above, it is @code{"prog1"}.

@item fname
This is the C function name for this function. This is the name that is used in C code for calling the function. The name is, by convention, @samp{F} prepended to the Lisp name, with all dashes (@samp{-}) in the Lisp name changed to underscores. Thus, to call this function from C code, call @code{Fprog1}. Remember that the arguments are of type @code{Lisp_Object}; various macros and functions for creating values of type @code{Lisp_Object} are declared in the file @file{lisp.h}.

Primitives whose names are special characters (e.g. @code{+} or @code{<}) are named by spelling out, in some fashion, the special character: e.g. @code{Fplus()} or @code{Flss()}. Primitives whose names begin with normal alphanumeric characters but also contain special characters are spelled out in some creative way, e.g. @code{let*} becomes @code{FletX()}.

Each function also has an associated structure that holds the data for the subr object that represents the function in Lisp. This structure conveys the Lisp symbol name to the initialization routine that will create the symbol and store the subr object as its definition. The C variable name of this structure is always @samp{S} prepended to the @var{fname}. You hardly ever need to be aware of the existence of this structure, since @code{DEFUN} plus @code{DEFSUBR} takes care of all the details.

@item min_args
This is the minimum number of arguments that the function requires. The function @code{prog1} allows a minimum of one argument.

@item max_args
This is the maximum number of arguments that the function accepts, if there is a fixed maximum. Alternatively, it can be @code{UNEVALLED}, indicating a special form that receives unevaluated arguments, or @code{MANY}, indicating an unlimited number of evaluated arguments (the C equivalent of @code{&rest}). Both @code{UNEVALLED} and @code{MANY} are macros. If @var{max_args} is a number, it may not be less than @var{min_args} and it may not be greater than 8. (If you need to add a function with more than 8 arguments, use the @code{MANY} form. Resist the urge to edit the definition of @code{DEFUN} in @file{lisp.h}. If you do it anyway, make sure to also add another clause to the switch statement in @code{primitive_funcall()}.)

@item interactive
This is an interactive specification, a string such as might be used as the argument of @code{interactive} in a Lisp function. In the case of @code{prog1}, it is 0 (a null pointer), indicating that @code{prog1} cannot be called interactively. A value of @code{""} indicates a function that should receive no arguments when called interactively.

@item docstring
This is the documentation string. It is written just like a documentation string for a function defined in Lisp; in particular, the first line should be a single sentence. Note how the documentation string is enclosed in a comment, none of the documentation is placed on the same lines as the comment-start and comment-end characters, and the comment-start characters are on the same line as the interactive specification. @file{make-docfile}, which scans the C files for documentation strings, is very particular about what it looks for, and will not properly extract the doc string if it's not in this exact format. In order to make both @file{etags} and @file{make-docfile} happy, make sure that the @code{DEFUN} line contains the @var{lname} and @var{fname}, and that the comment-start characters for the doc string are on the same line as the interactive specification, and put a newline directly after them (and before the comment-end characters).

@item arglist
This is the comma-separated list of arguments to the C function. For a function with a fixed maximum number of arguments, provide a C argument for each Lisp argument. In this case, unlike regular C functions, the types of the arguments are not declared; they are simply always of type @code{Lisp_Object}. The names of the C arguments will be used as the names of the arguments to the Lisp primitive as displayed in its documentation, modulo the same concerns described above for @code{F...} names (in particular, underscores in the C arguments become dashes in the Lisp arguments).

There is one additional kludge: a trailing @samp{_} on the C argument is discarded when forming the Lisp argument.
This allows C language reserved words (like @code{default}) or global symbols (like @code{dirname}) to be used as argument names without compiler warnings or errors.

A Lisp function with @w{@var{max_args} = @code{UNEVALLED}} is a @w{@dfn{special form}}; its arguments are not evaluated. Instead it receives one argument of type @code{Lisp_Object}, a (Lisp) list of the unevaluated arguments, conventionally named @code{(args)}.

When a Lisp function has no upper limit on the number of arguments, specify @w{@var{max_args} = @code{MANY}}. In this case its implementation in C actually receives exactly two arguments: the number of Lisp arguments (an @code{int}) and the address of a block containing their values (a @w{@code{Lisp_Object *}}). In this case only are the C types specified in the @var{arglist}: @w{@code{(int nargs, Lisp_Object *args)}}.
@end table

Within the function @code{Fprog1} itself, note the use of the macros @code{GCPRO1} and @code{UNGCPRO}. @code{GCPRO1} is used to ``protect'' a variable from garbage collection---to inform the garbage collector that it must look in that variable and regard the object pointed at by its contents as an accessible object. This is necessary whenever you call @code{Feval} or anything that can directly or indirectly call @code{Feval} (this includes the @code{QUIT} macro!). At such a time, any Lisp object that you intend to refer to again must be protected somehow. @code{UNGCPRO} cancels the protection of the variables that are protected in the current function. It is necessary to do this explicitly.

The macro @code{GCPRO1} protects just one local variable. If you want to protect two, use @code{GCPRO2} instead; repeating @code{GCPRO1} will not work. Macros @code{GCPRO3} and @code{GCPRO4} also exist. These macros implicitly use local variables such as @code{gcpro1}; you must declare these explicitly, with type @code{struct gcpro}. Thus, if you use @code{GCPRO2}, you must declare @code{gcpro1} and @code{gcpro2}.
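Here is a minimal sketch of the idiom, with an invented function name. The argument @code{form} needs no protection of its own, per the caller-protects rule described next; only the freshly created cell does:

@example
/* Evaluate FORM for effect, then return a freshly consed cell.
   The new cell must be GCPROed across the call to Feval. */
static Lisp_Object
eval_and_cons (Lisp_Object form)
@{
  Lisp_Object val = Fcons (Qnil, Qnil);
  struct gcpro gcpro1;

  GCPRO1 (val);          /* protect val ... */
  Feval (form);          /* ... across a call that can GC */
  UNGCPRO;
  return val;
@}
@end example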
@cindex caller-protects (@code{GCPRO} rule)
Note also that the general rule is @dfn{caller-protects}; i.e. you are only responsible for protecting those Lisp objects that you create. Any objects passed to you as arguments should have been protected by whoever created them, so you don't in general have to protect them. In particular, the arguments to any Lisp primitive are always automatically @code{GCPRO}ed, when called ``normally'' from Lisp code or bytecode. So only a few Lisp primitives that are called frequently from C code, such as @code{Fprogn}, protect their arguments as a service to their caller. You don't need to protect your arguments when writing a new @code{DEFUN}.

@code{GCPRO}ing is perhaps the trickiest and most error-prone part of XEmacs coding. It is @strong{extremely} important that you get this right and use a great deal of discipline when writing this code. @xref{GCPROing, ,@code{GCPRO}ing}, for full details on how to do this.

What @code{DEFUN} actually does is declare a global structure of type @code{Lisp_Subr} whose name begins with capital @samp{SF} and which contains information about the primitive (e.g. a pointer to the function, its minimum and maximum allowed arguments, a string describing its Lisp name); @code{DEFUN} then begins a normal C function declaration using the @code{F...} name. The Lisp subr object that is the function definition of a primitive (i.e. the object in the function slot of the symbol that names the primitive) actually points to this @samp{SF} structure; when @code{Feval} encounters a subr, it looks in the structure to find out how to call the C function.

Defining the C function is not enough to make a Lisp primitive available; you must also create the Lisp symbol for the primitive (the symbol is @dfn{interned}; @pxref{Obarrays}) and store a suitable subr object in its function cell. (If you don't do this, the primitive won't be seen by Lisp code.) The code looks like this:

@example
DEFSUBR (@var{fname});
@end example

@noindent
Here @var{fname} is the same name you used as the second argument to @code{DEFUN}. This call to @code{DEFSUBR} should go in the @code{syms_of_*()} function at the end of the module. If no such function exists, create it and make sure to also declare it in @file{symsinit.h} and call it from the appropriate spot in @code{main()}. @xref{Writing New Modules}.

Note that C code cannot call functions by name unless they are defined in C. The way to call a function written in Lisp from C is to use @code{Ffuncall}, which embodies the Lisp function @code{funcall}. Since the Lisp function @code{funcall} accepts an unlimited number of arguments, in C it takes two: the number of Lisp-level arguments, and a one-dimensional array containing their values. The first Lisp-level argument is the Lisp function to call, and the rest are the arguments to pass to it. Since @code{Ffuncall} can call the evaluator, you must protect pointers from garbage collection around the call to @code{Ffuncall}. (However, @code{Ffuncall} explicitly protects all of its parameters, so you don't have to protect any pointers passed as parameters to it.)

The C functions @code{call0}, @code{call1}, @code{call2}, and so on, provide handy ways to call a Lisp function conveniently with a fixed number of arguments. They work by calling @code{Ffuncall}.
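For example, here is a hedged sketch of invoking a hypothetical Lisp-level function @code{my-cleanup-function} on a Lisp object @code{buf} already held in a C variable:

@example
/* Call (my-cleanup-function BUF) from C.  The function name is
   invented for illustration. */
Lisp_Object result = call1 (intern ("my-cleanup-function"), buf);
@end example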
@file{eval.c} is a very good file to look through for examples; @file{lisp.h} contains the definitions for important macros and functions.

@node Writing Good Comments, Adding Global Lisp Variables, Writing Lisp Primitives, Rules When Writing New C Code
@section Writing Good Comments
@cindex writing good comments
@cindex comments, writing good

Comments are a lifeline for programmers trying to understand tricky code. In general, the less obvious it is what you are doing, the more you need a comment, and the more detailed it needs to be. You should always be on guard when you're writing code for stuff that's tricky, and should constantly be putting yourself in someone else's shoes and asking if that person could figure out without much difficulty what's going on. (Assume they are a competent programmer who understands the essentials of how the XEmacs code is structured but doesn't know much about the module you're working on or any algorithms you're using.) If you're not sure whether they would be able to, add a comment. Always err on the side of more comments, rather than fewer.

Generally, when making comments, there is no need to attribute them with your name or initials. This especially goes for small, easy-to-understand, non-opinionated ones. Also, comments indicating where, when, and by whom a file was changed are @emph{strongly} discouraged, and in general will be removed as they are discovered. This is exactly what @file{ChangeLogs} are there for.

However, it can occasionally be useful to mark exactly where (but not when or by whom) changes are made, particularly when making small changes to a file imported from elsewhere. These marks help when later on a newer version of the file is imported and the changes need to be merged. (If everything were always kept in CVS, there would be no need for this. But in practice, this often doesn't happen, or the CVS repository is later on lost or unavailable to the person doing the update.)

When putting in an explicit opinion in a comment, you should @emph{always} attribute it with your name and the date. This also goes for long, complex comments explaining in detail the workings of something -- by putting your name there, you make it possible for someone who has questions about how that thing works to determine who wrote the comment, so they can write to them. Use your actual name or your alias at xemacs.org, and not your initials or nickname, unless that is generally recognized (e.g. @samp{jwz}). Even then, please consider requesting a virtual user at xemacs.org (forwarding address; we can't provide an actual mailbox). Otherwise, give first and last name. If you're not a regular contributor, you might consider putting your email address in -- it may be in the ChangeLog, but after a while ChangeLogs have a tendency to disappear or get muddled. (E.g. your comment may get copied somewhere else or even into another program, and tracking down the proper ChangeLog may be very difficult.)

If you come across an opinion that is not or is no longer valid, or you come across any comment that no longer applies but you want to keep it around, enclose it in @samp{[[ } and @samp{ ]]} marks and add a comment afterwards explaining why the preceding comment is no longer valid. Put your name on this comment, as explained above.

Just as comments are a lifeline to programmers, incorrect comments are death. If you come across an incorrect comment, @strong{immediately} correct it or flag it as incorrect, as described in the previous paragraph. Whenever you work on a section of code, @emph{always} make sure to update any comments to be correct -- or, at the very least, flag them as incorrect.

To indicate a ``todo'' or other problem, use four pound signs -- i.e. @samp{####}.

@node Adding Global Lisp Variables, Writing Macros, Writing Good Comments, Rules When Writing New C Code
@section Adding Global Lisp Variables
@cindex global Lisp variables, adding
@cindex variables, adding global Lisp

Global variables whose names begin with @samp{Q} are constants whose value is a symbol of a particular name. The name of the variable should be derived from the name of the symbol using the same rules as for Lisp primitives. These variables are initialized using a call to @code{defsymbol()} in the @code{syms_of_*()} function. (This call interns a symbol, sets the C variable to the resulting Lisp object, and calls @code{staticpro()} on the C variable to tell the garbage-collection mechanism about this variable. What @code{staticpro()} does is add a pointer to the variable to a large global array; when garbage-collection happens, all pointers listed in the array are used as starting points for marking Lisp objects. This is important because it's quite possible that the only current reference to the object is the C variable. In the case of symbols, the @code{staticpro()} doesn't matter all that much because the symbol is contained in @code{obarray}, which is itself @code{staticpro()}ed.
However, it's possible that a naughty user could do something like uninterning the symbol out of @code{obarray} or even setting @code{obarray} to a different value [although this is likely to make XEmacs crash!].)

@strong{Please note:} It is potentially deadly if you declare a @samp{Q...} variable in two different modules. The two calls to @code{defsymbol()} are no problem, but some linkers will complain about multiply-defined symbols. The most insidious aspect of this is that often the link will succeed anyway, but then the resulting executable will sometimes crash in obscure ways during certain operations! To avoid this problem, declare any symbols with common names (such as @code{text}) that are not obviously associated with this particular module in the file @file{general-slots.h}. The ``-slots'' suffix indicates that this is a file that is included multiple times in @file{general.c}. Redefinition of preprocessor macros allows the effects to be different in each context, so this is actually more convenient and less error-prone than doing it in your module.

Global variables whose names begin with @samp{V} are variables that contain Lisp objects. The convention here is that all global variables of type @code{Lisp_Object} begin with @samp{V}, and all others don't (including integer and boolean variables that have Lisp equivalents). Most of the time, these variables have equivalents in Lisp, but some don't. Those that do are declared by a call to @code{DEFVAR_LISP()} in the @code{vars_of_*()} initializer for the module. What this does is create a special @dfn{symbol-value-forward} Lisp object that contains a pointer to the C variable, intern a symbol whose name is as specified in the call to @code{DEFVAR_LISP()}, and set its value to the symbol-value-forward Lisp object; it also calls @code{staticpro()} on the C variable to tell the garbage-collection mechanism about the variable. When @code{eval} (or actually @code{symbol-value}) encounters this special object in the process of retrieving a variable's value, it follows the indirection to the C variable and gets its value. @code{setq} does similar things so that the C variable gets changed.

Whether or not you @code{DEFVAR_LISP()} a variable, you need to initialize it in the @code{vars_of_*()} function; otherwise it will end up as all zeroes, which is the integer 0 (@emph{not} @code{nil}), and this is probably not what you want. Also, if the variable is not @code{DEFVAR_LISP()}ed, @strong{you must call} @code{staticpro()} on the C variable in the @code{vars_of_*()} function. Otherwise, the garbage-collection mechanism won't know that the object in this variable is in use, and will happily collect it and reuse its storage for another Lisp object, and you will be the one who's unhappy when you can't figure out how your variable got overwritten.
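Putting the pieces together, here is a minimal sketch (with invented names) of declaring and initializing such a variable:

@example
Lisp_Object Vmy_widget_list;       /* hypothetical variable */

void
vars_of_mymodule (void)
@{
  DEFVAR_LISP ("my-widget-list", &Vmy_widget_list /*
List of widgets, for illustration only.
*/ );
  Vmy_widget_list = Qnil;   /* don't leave it as binary zeroes! */
@}
@end example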
@node Writing Macros, Proper Use of Unsigned Types, Adding Global Lisp Variables, Rules When Writing New C Code
@section Writing Macros
@cindex writing macros
@cindex macros, writing

Heavily used small code fragments need to be fast. The traditional way to implement such code fragments in C is with macros. But macros in C are known to be broken.

@cindex macro hygiene
Macro arguments that are repeatedly evaluated may suffer from repeated side effects or suboptimal performance. Variable names used in macros may collide with the caller's variables, causing (at least) unwanted compiler warnings.

In order to solve these problems, and maintain statement semantics, one should use the @code{do @{ ... @} while (0)} trick (which safely works inside of @code{if} statements) while trying to reference macro arguments exactly once using local variables.

Let's take a look at this poor macro definition:

@example
#define MARK_OBJECT(obj) \
  if (!marked_p (obj)) mark_object (obj), did_mark = 1
@end example

This macro evaluates its argument twice, and also fails if used like this:

@example
if (flag)
  MARK_OBJECT (obj);
else
  do_something ();
@end example

A much better definition is

@example
#define MARK_OBJECT(obj) do @{  \
  Lisp_Object mo_obj = (obj);  \
  if (!marked_p (mo_obj))      \
    @{                          \
      mark_object (mo_obj);    \
      did_mark = 1;            \
    @}                          \
@} while (0)
@end example

Notice the elimination of double evaluation by using the local variable with the obscure name. Writing safe and efficient macros requires great care. The one problem with macros that cannot be portably worked around is that, since a C block has no value, a macro used as an expression rather than a statement cannot use the techniques just described to avoid multiple evaluation.

@cindex inline functions
In most cases where a macro has function semantics, an inline function is a better implementation technique. Modern compiler optimizers tend to inline functions even if they have no @code{inline} keyword, and configure magic ensures that the @code{inline} keyword can be safely used as an additional compiler hint. Inline functions used in a single @file{.c} file are easy. The function must already be defined to be @code{static}. Just add another @code{inline} keyword to the definition.

@example
inline static int
heavily_used_small_function (int arg)
@{
  ...
@}
@end example

Inline functions in header files are trickier, because we would like to make the following optimization if the function is @emph{not} inlined (for example, because we're compiling for debugging). We would like the function to be defined externally exactly once, and each calling translation unit would create an external reference to the function, instead of including a definition of the inline function in the object code of every translation unit that uses it. This optimization is currently only available for gcc. But you don't have to worry about the trickiness; just define your inline functions in header files using this pattern:

@example
DECLARE_INLINE_HEADER (
int
i_used_to_be_a_crufty_macro_but_look_at_me_now (int arg)
)
@{
  ...
@}
@end example

We use @code{DECLARE_INLINE_HEADER} rather than just the modifier @code{INLINE_HEADER} to prevent warnings when compiling with @code{gcc -Wmissing-declarations}. I consider issuing this warning for inline functions a gcc bug, but the gcc maintainers disagree.

@cindex inline functions, headers
@cindex header files, inline functions
Every header which contains inline functions, either directly by using @code{DECLARE_INLINE_HEADER} or indirectly by using @code{DECLARE_LRECORD}, must be added to @file{inline.c}'s includes to make the optimization described above work. (Optimization note: if all @code{INLINE_HEADER} functions are in fact inlined in all translation units, then the linker can just discard @code{inline.o}, since it contains only unreferenced code.)
The three golden rules of macros:

@enumerate
@item
Anything that's an lvalue can be evaluated more than once.

@item
Macros where anything else can be evaluated more than once should have the word ``unsafe'' in their name (exceptions may be made for large sets of macros that evaluate arguments of certain types more than once, e.g. @code{struct buffer *} arguments, when clearly indicated in the macro documentation). These macros are generally meant to be called only by other macros that have already stored the calling values in temporary variables.

@item
Nothing else can be evaluated more than once. Use inline functions, if necessary, to prevent multiple evaluation.
@end enumerate

NOTE: The functions and macros below are given full prototypes in their docs, even when the implementation is a macro. In such cases, passing an argument of a type other than expected will produce undefined results. Also, given that macros can do things functions can't (in particular, directly modify arguments as if they were passed by reference), the declaration syntax has been extended to include the call-by-reference syntax from C++, where an @samp{&} after a type indicates that the argument is an lvalue and is passed by reference, i.e. the function can modify its value. (This is equivalent in C to passing a pointer to the argument, but without the need to explicitly worry about pointers.)

When to capitalize macros:

@itemize @bullet
@item
Capitalize macros doing stuff obviously impossible with (C) functions, e.g. directly modifying arguments as if they were passed by reference.

@item
Capitalize macros that evaluate @strong{any} argument more than once regardless of whether that's ``allowed'' (e.g. buffer arguments).

@item
Capitalize macros that directly access a field in a @code{Lisp_Object} or its equivalent underlying structure. In such cases, access through the @code{Lisp_Object} precedes the macro with an @samp{X}, and access through the underlying structure doesn't.

@item
Capitalize certain other basic macros relating to @code{Lisp_Object}s; e.g. @code{FRAMEP}, @code{CHECK_FRAME}, etc.

@item
Try to avoid capitalizing any other macros.
@end itemize

@node Proper Use of Unsigned Types, Major Textual Changes, Writing Macros, Rules When Writing New C Code
@section Proper Use of Unsigned Types
@cindex unsigned types, proper use of
@cindex types, proper use of unsigned

Avoid using @code{unsigned int} and @code{unsigned long} whenever possible. Unsigned types are viral -- any arithmetic or comparisons involving mixed signed and unsigned types are automatically converted to unsigned, which is almost certainly not what you want. Many subtle and hard-to-find bugs are created by careless use of unsigned types. In general, you should almost @emph{never} use an unsigned type to hold a regular quantity of any sort. The only exceptions are

@enumerate
@item
When there's a reasonable possibility you will actually need all 32 or 64 bits to store the quantity.

@item
When calling existing APIs that require unsigned types. In this case, you should still do all manipulation using signed types, and do the conversion at the very threshold of the API call.

@item
In existing code that you don't want to modify because you don't maintain it.

@item
In bit-field structures.
@end enumerate

Other reasonable uses of @code{unsigned int} and @code{unsigned long} are representing non-quantities -- e.g. bit-oriented flags and such.
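A tiny illustration of the viral-conversion problem (hypothetical code; @code{do_something} is invented):

@example
int n = -1;
unsigned int limit = 100;

/* n is converted to unsigned: with a 32-bit int, the test compares
   4294967295 < 100, which is false -- even though -1 < 100. */
if (n < limit)
  do_something ();   /* never executed! */
@end example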
@node Major Textual Changes, Debugging and Testing, Proper Use of Unsigned Types, Rules When Writing New C Code
@section Major Textual Changes
@cindex textual changes, major
@cindex major textual changes

Sometimes major textual changes are made to the source. This means that a search-and-replace is done to change type names and such. Some people disagree with such changes, and certainly, if done without good reason, they will just lead to headaches. But it's important to keep the code clean and understandable, and consistent naming goes a long way towards this.

An example of the right way to do this was the so-called ``great integral type renaming''.

@menu
* Great Integral Type Renaming::
* Text/Char Type Renaming::
@end menu

@node Great Integral Type Renaming, Text/Char Type Renaming, Major Textual Changes, Major Textual Changes
@subsection Great Integral Type Renaming
@cindex Great Integral Type Renaming
@cindex integral type renaming, great
@cindex type renaming, integral
@cindex renaming, integral types

The purpose of this is to rationalize the names used for various integral types, so that they match their intended uses and follow consistent conventions, and to eliminate types that were not semantically different from each other.

The conventions are:

@itemize @bullet
@item
All integral types that measure quantities of anything are signed. Some people disagree vociferously with this, but their arguments are mostly theoretical, and are vastly outweighed by the practical headaches of mixing signed and unsigned values, and more importantly by the far increased likelihood of inadvertent bugs: Because of the broken ``viral'' nature of unsigned quantities in C (operations involving mixed signed/unsigned are done unsigned, when exactly the opposite is nearly always wanted), even a single error in declaring a quantity unsigned that should be signed, or even the even more subtle error of comparing signed and unsigned values and forgetting the necessary cast, can be catastrophic, as comparisons will yield wrong results. @samp{-Wsign-compare} is turned on specifically to catch this, but this tends to result in a great number of warnings when mixing signed and unsigned, and the casts are annoying. More has been written on this elsewhere.

@item
All such quantity types just mentioned boil down to @code{EMACS_INT}, which is 32 bits on 32-bit machines and 64 bits on 64-bit machines. This is guaranteed to be the same size as Lisp objects of type @code{int}, and (as far as I can tell) of @code{size_t} (unsigned!) and @code{ssize_t}. The only type below that is not an @code{EMACS_INT} is @code{Hashcode}, which is an unsigned value of the same size as @code{EMACS_INT}.

@item
Type names should be relatively short (no more than 10 characters or so), with the first letter capitalized and no underscores if they can at all be avoided.

@item
``count'' == a zero-based measurement of some quantity. Includes sizes, offsets, and indexes.

@item
``bpos'' == a one-based measurement of a position in a buffer. @code{Charbpos} and @code{Bytebpos} count text in the buffer, rather than bytes in memory; thus @code{Bytebpos} does not directly correspond to the memory representation. Use @code{Membpos} for this.

@item
``Char'' refers to internal-format characters, not to the C type @code{char}, which is really a byte.
@end itemize

For the actual name changes, see the script below.

I ran the following script to do the conversion. (NOTE: This script is idempotent. You can safely run it multiple times and it will not screw up previous results -- in fact, it will do nothing if nothing has changed. Thus, it can be run repeatedly as necessary to handle patches coming in from old workspaces, or old branches.) There are two tags, just before and just after the change: @samp{pre-integral-type-rename} and @samp{post-integral-type-rename}. When merging code from the main trunk into a branch, the best thing to do is first merge up to @samp{pre-integral-type-rename}, then apply the script and associated changes, then merge from @samp{post-integral-type-rename} to the present.
(Alternatively, just do the merging in one operation; but you may then have a lot of conflicts needing to be resolved by hand.)

Script @samp{fixtypes.sh} follows:

@example
----------------------------------- cut ------------------------------------
files="*.[ch] s/*.h m/*.h config.h.in ../configure.in Makefile.in.in ../lib-src/*.[ch] ../lwlib/*.[ch]"
gr Memory_Count Bytecount $files
gr Lstream_Data_Count Bytecount $files
gr Element_Count Elemcount $files
gr Hash_Code Hashcode $files
gr extcount bytecount $files
gr bufpos charbpos $files
gr bytind bytebpos $files
gr memind membpos $files
gr bufbyte intbyte $files
gr Extcount Bytecount $files
gr Bufpos Charbpos $files
gr Bytind Bytebpos $files
gr Memind Membpos $files
gr Bufbyte Intbyte $files
gr EXTCOUNT BYTECOUNT $files
gr BUFPOS CHARBPOS $files
gr BYTIND BYTEBPOS $files
gr MEMIND MEMBPOS $files
gr BUFBYTE INTBYTE $files
gr MEMORY_COUNT BYTECOUNT $files
gr LSTREAM_DATA_COUNT BYTECOUNT $files
gr ELEMENT_COUNT ELEMCOUNT $files
gr HASH_CODE HASHCODE $files
----------------------------------- cut ------------------------------------
@end example

The @samp{gr} script, and the scripts it uses, are documented in @file{README.global-renaming}, because if placed in this file they would need to have their @@ characters doubled, meaning you couldn't easily cut and paste from the source.

In addition to those programs, I needed to fix up a few other things, particularly relating to the duplicate definitions of types, now that some types merged with others. Specifically:

@enumerate
@item
In @file{lisp.h}, removed duplicate declarations of @code{Bytecount}. The changed code should now look like this: (In each code snippet below, the first and last lines are the same as the original, as are all lines outside of those lines. That allows you to locate the section to be replaced, and replace the stuff in that section, verifying that there isn't anything new added that would need to be kept.)

@example
--------------------------------- snip -------------------------------------
/* Counts of bytes or chars */
typedef EMACS_INT Bytecount;
typedef EMACS_INT Charcount;

/* Counts of elements */
typedef EMACS_INT Elemcount;

/* Hash codes */
typedef unsigned long Hashcode;

/* ------------------------ dynamic arrays ------------------- */
--------------------------------- snip -------------------------------------
@end example

@item
In @file{lstream.h}, removed duplicate declaration of @code{Bytecount}. Rewrote the comment about this type. The changed code should now look like this:

@example
--------------------------------- snip -------------------------------------
#endif

/* There have been some arguments over what the type should be that
   specifies a count of bytes in a data block to be written out or read in,
   using Lstream_read(), Lstream_write(), and related functions.
   Originally it was long, which worked fine; Martin "corrected" these to
   size_t and ssize_t on the grounds that this is theoretically cleaner and
   is in keeping with the C standards.  Unfortunately, this practice is
   horribly error-prone due to design flaws in the way that mixed
   signed/unsigned arithmetic happens.  In fact, by doing this change,
   Martin introduced a subtle but fatal error that caused the operation of
   sending large mail messages to the SMTP server under Windows to fail.
   By putting all values back to be signed, avoiding any signed/unsigned
   mixing, the bug immediately went away.  The type then in use was
   Lstream_Data_Count, so that it could be reverted cleanly if a vote came
   to that.
Now it is Bytecount. Some earlier comments about why the type must be signed: This MUST BE SIGNED, since it also is used in functions that return the number of bytes actually read to or written from in an operation, and these functions can return -1 to signal error. Note that the standard Unix @code{read()} and @code{write()} functions define the count going in as a size_t, which is UNSIGNED, and the count going out as an ssize_t, which is SIGNED. This is a horrible design flaw. Not only is it highly likely to lead to logic errors when a -1 gets interpreted as a large positive number, but operations are bound to fail in all sorts of horrible ways when a number in the upper-half of the size_t range is passed in -- this number is unrepresentable as an ssize_t, so code that checks to see how many bytes are actually written (which is mandatory if you are dealing with certain types of devices) will get completely screwed up. --ben */ typedef enum lstream_buffering --------------------------------- snip ------------------------------------- @end example @item in @file{dumper.c}, there are four places, all inside of @code{switch()} statements, where XD_BYTECOUNT appears twice as a case tag. In each case, the two case blocks contain identical code, and you should *REMOVE THE SECOND* and leave the first. @end enumerate @node Text/Char Type Renaming, , Great Integral Type Renaming, Major Textual Changes @subsection Text/Char Type Renaming @cindex Text/Char Type Renaming @cindex type renaming, text/char @cindex renaming, text/char types The purpose of this was @enumerate @item To distinguish between ``charptr'' when it refers to operations on the pointer itself and when it refers to operations on text @item To use consistent naming for everything referring to internal format, i.e. @end enumerate @example Itext == text in internal format Ibyte == a byte in such text Ichar == a char as represented in internal character format @end example Thus e.g. @example set_charptr_emchar -> set_itext_ichar @end example This was done using a script like this: @example files="*.[ch] s/*.h m/*.h config.h.in ../configure.in Makefile.in.in ../lib-src/*.[ch] ../lwlib/*.[ch]" gr Intbyte Ibyte $files gr INTBYTE IBYTE $files gr intbyte ibyte $files gr EMCHAR ICHAR $files gr emchar ichar $files gr Emchar Ichar $files gr INC_CHARPTR INC_IBYTEPTR $files gr DEC_CHARPTR DEC_IBYTEPTR $files gr VALIDATE_CHARPTR VALIDATE_IBYTEPTR $files gr valid_charptr valid_ibyteptr $files gr CHARPTR ITEXT $files gr charptr itext $files gr Charptr Itext $files @end example See above for the source to @samp{gr}. As in the integral-types change, there are pre and post tags before and after the change: @example pre-internal-format-textual-renaming post-internal-format-textual-renaming @end example When merging a large branch, follow the same sort of procedure documented above, using these tags -- essentially sync up to the pre tag, then apply the script yourself, then sync from the post tag to the present. You can probably do the same if you don't have a separate workspace, but do have lots of outstanding changes and you'd rather not just merge all the textual changes directly. Use something like this: (WARNING: I'm not a CVS guru; before trying this, or any large operation that might potentially mess things up, @strong{DEFINITELY} make a backup of your existing workspace.) 
@example
cup -r pre-internal-format-textual-renaming
<apply script>
cup -A -j post-internal-format-textual-renaming -j HEAD
@end example

This might also work:

@example
cup -j pre-internal-format-textual-renaming
<apply script>
cup -j post-internal-format-textual-renaming -j HEAD
@end example

ben

The following is a script to go in the opposite direction:

@example
files="*.[ch] s/*.h m/*.h config.h.in ../configure.in Makefile.in.in ../lib-src/*.[ch] ../lwlib/*.[ch]"

# Evidently Perl considers _ to be a word char ala \b, even though XEmacs
# doesn't.  We need to be careful here with ibyte/ichar because of words
# like Richard, eicharlen(), multibyte, HIBYTE, etc.

gr Ibyte Intbyte $files
gr '\bIBYTE' INTBYTE $files
gr '\bibyte' intbyte $files
gr '\bICHAR' EMCHAR $files
gr '\bichar' emchar $files
gr '\bIchar' Emchar $files
gr '\bIBYTEPTR' CHARPTR $files
gr '\bibyteptr' charptr $files
gr '\bITEXT' CHARPTR $files
gr '\bitext' charptr $files
gr '\bItext' CHARPTR $files
gr '_IBYTE' _INTBYTE $files
gr '_ibyte' _intbyte $files
gr '_ICHAR' _EMCHAR $files
gr '_ichar' _emchar $files
gr '_Ichar' _Emchar $files
gr '_IBYTEPTR' _CHARPTR $files
gr '_ibyteptr' _charptr $files
gr '_ITEXT' _CHARPTR $files
gr '_itext' _charptr $files
gr '_Itext' _CHARPTR $files
@end example

@node Debugging and Testing, , Major Textual Changes, Rules When Writing New C Code
@section Debugging and Testing
@cindex debugging and testing
@cindex Purify
@cindex Quantify

To make a purified XEmacs, do: @code{make puremacs}. To make a quantified XEmacs, do: @code{make quantmacs}.

You simply can't dump Quantified and Purified images (unless using the portable dumper). Purify gets confused when xemacs frees memory in one process that was allocated in a @emph{different} process on a different machine! Run it like so:

@example
temacs -batch -l loadup.el run-temacs @var{xemacs-args...}
@end example

@cindex error checking
Before you go through the trouble, are you compiling with all debugging and error-checking off? If not, try that first. Be warned that while Quantify is directly responsible for quite a few optimizations which have been made to XEmacs, doing a run which generates results which can be acted upon is not necessarily a trivial task. Also, if you're still willing to do some runs, make sure you configure with the @samp{--quantify} flag. That will keep Quantify from starting to record data until after the loadup is completed and will shut off recording right before it shuts down (which generates enough bogus data to throw most results off). It also enables three additional elisp commands: @code{quantify-start-recording-data}, @code{quantify-stop-recording-data} and @code{quantify-clear-data}.

If you want to make XEmacs faster, target your favorite slow benchmark, run a profiler like Quantify, @code{gprof}, or @code{tcov}, and figure out where the cycles are going. In many cases you can localize the problem (because a particular new feature or even a single patch elicited it). Don't hesitate to use brute force techniques like a global counter incremented at strategic places, especially in combination with other performance indications (@emph{e.g.}, degree of buffer fragmentation into extents).

Specific projects:

@itemize @bullet
@item
Make the garbage collector faster. Figure out how to write an incremental garbage collector.
@item
Write a compiler that takes bytecode and spits out C code. Unfortunately, you will then need a C compiler and a more fully developed module system.
@item
Speed up redisplay.
@item Speed up syntax highlighting. It was suggested that ``maybe moving some of the syntax highlighting capabilities into C would make a difference.'' Wrong idea, I think. When processing one 400kB file a particular low-level routine was being called 40 @emph{million} times simply for @emph{one} call to @code{newline-and-indent}. Syntax highlighting needs to be rewritten to use a reliable, fast parser, then to trust the pre-parsed structure, and only do re-highlighting locally to a text change. Modern machines are fast enough to implement such parsers in Lisp; but no machine will ever be fast enough to deal with quadratic (or worse) algorithms! @item Implement tail recursion in Emacs Lisp (hard!). @end itemize Unfortunately, Emacs Lisp is slow, and is going to stay slow. Function calls in elisp are especially expensive. Iterating over a long list is going to be 30 times faster implemented in C than in Elisp. To get started debugging XEmacs, take a look at the @file{.gdbinit} and @file{.dbxrc} files in the @file{src} directory. See the section in the XEmacs FAQ on How to Debug an XEmacs problem with a debugger. After making source code changes, run @code{make check} to ensure that you haven't introduced any regressions. If you want to make xemacs more reliable, please improve the test suite in @file{tests/automated}. Did you make sure you didn't introduce any new compiler warnings? Before submitting a patch, please try compiling at least once with @example configure --with-mule --use-union-type --error-checking=all @end example @node Regression Testing XEmacs, CVS Techniques, Rules When Writing New C Code, Top @chapter Regression Testing XEmacs @cindex testing, regression @menu * How to Regression-Test:: * Modules for Regression Testing:: @end menu @node How to Regression-Test, Modules for Regression Testing, Regression Testing XEmacs, Regression Testing XEmacs @section How to Regression-Test @cindex how to regression-test @cindex regression-test, how to @cindex testing, regression, how to The source directory @file{tests/automated} contains XEmacs' automated test suite. The usual way of running all the tests is running @code{make check} from the top-level build directory. The test suite is unfinished and it's still lacking some essential features. It is nevertheless recommended that you run the tests to confirm that XEmacs behaves correctly. If you want to run a specific test case, you can do it from the command-line like this: @example $ xemacs -batch -l test-harness.elc -f batch-test-emacs TEST-FILE @end example If a test fails and you need more information, you can run the test suite interactively by loading @file{test-harness.el} into a running XEmacs and typing @kbd{M-x test-emacs-test-file RET <filename> RET}. You will see a log of passed and failed tests, which should allow you to investigate the source of the error and ultimately fix the bug. If you are not capable of, or don't have time for, debugging it yourself, please do report the failures using @kbd{M-x report-emacs-bug} or @kbd{M-x build-report}. @deffn Command test-emacs-test-file file Runs the tests in @var{file}. @file{test-harness.el} must be loaded. Defines all the macros described in this node, and undefines them when done. @end deffn Adding a new test file is trivial: just create a new file here and it will be run. There is no need to byte-compile any of the files in this directory---the test-harness will take care of any necessary byte-compilation. 
Look at the existing test cases for the examples of coding test cases. It all boils down to your imagination and judicious use of the macros @code{Assert}, @code{Check-Error}, @code{Check-Error-Message}, and @code{Check-Message}. Note that all of these macros are defined only for the duration of the test: they do not exist in the global environment. @deffn Macro Assert expr Check that @var{expr} is non-nil at this point in the test. @end deffn @deffn Macro Check-Error expected-error body Check that execution of @var{body} causes @var{expected-error} to be signaled. @var{body} is a @code{progn}-like body, and may contain several expressions. @var{expected-error} is a symbol defined as an error by @code{define-error}. @end deffn @deffn Macro Check-Error-Message expected-error expected-error-regexp body Check that execution of @var{body} causes @var{expected-error} to be signaled, and generate a message matching @var{expected-error-regexp}. @var{body} is a @code{progn}-like body, and may contain several expressions. @var{expected-error} is a symbol defined as an error by @code{define-error}. @end deffn @deffn Macro Check-Message expected-message body Check that execution of @var{body} causes @var{expected-message} to be generated (using @code{message} or a similar function). @var{body} is a @code{progn}-like body, and may contain several expressions. @end deffn Here's a simple example checking case-sensitive and case-insensitive comparisons from @file{case-tests.el}. @example (with-temp-buffer (insert "Test Buffer") (let ((case-fold-search t)) (goto-char (point-min)) (Assert (eq (search-forward "test buffer" nil t) 12)) (goto-char (point-min)) (Assert (eq (search-forward "Test buffer" nil t) 12)) (goto-char (point-min)) (Assert (eq (search-forward "Test Buffer" nil t) 12)) (setq case-fold-search nil) (goto-char (point-min)) (Assert (not (search-forward "test buffer" nil t))) (goto-char (point-min)) (Assert (not (search-forward "Test buffer" nil t))) (goto-char (point-min)) (Assert (eq (search-forward "Test Buffer" nil t) 12)))) @end example This example could be saved in a file in @file{tests/automated}, and it would constitute a complete test, automatically executed when you run @kbd{make check} after building XEmacs. More complex tests may require substantial temporary scaffolding to create the environment that elicits the bugs, but the top-level @file{Makefile} and @file{test-harness.el} handle the running and collection of results from the @code{Assert}, @code{Check-Error}, @code{Check-Error-Message}, and @code{Check-Message} macros. Don't suppress tests just because they're due to known bugs not yet fixed---use the @code{Known-Bug-Expect-Failure} wrapper macro to mark them. @deffn Macro Known-Bug-Expect-Failure body Arrange for failing tests in @var{body} to generate messages prefixed with "KNOWN BUG:" instead of "FAIL:". @var{body} is a @code{progn}-like body, and may contain several tests. @end deffn A lot of the tests we run push limits; suppress Ebola warning messages with the @code{Ignore-Ebola} wrapper macro. @deffn Macro Ignore-Ebola body Suppress Ebola warning messages while running tests in @var{body}. @var{body} is a @code{progn}-like body, and may contain several tests. @end deffn Both macros are defined temporarily within the test function. Simple examples: @example ;; Apparently Ignore-Ebola is a solution with no problem to address. ;; There are no examples in 21.5, anyway. 
;; from regexp-tests.el
(Known-Bug-Expect-Failure
 (Assert (not (string-match "\\b" "")))
 (Assert (not (string-match " \\b" " "))))
@end example

In general, you should avoid using functionality from packages in your tests, because you can't be sure that everyone will have the required package. However, if you've got a test that works, by all means add it. Simply wrap the test in an appropriate conditional, add a notice that the test was skipped, and update the @code{skipped-test-reasons} hash table. The wrapper macro @code{Skip-Test-Unless} is provided to handle common cases.

@defvar skipped-test-reasons
Hash table counting the number of times a particular reason is given for skipping tests. This is only defined within @code{test-emacs-test-file}.
@end defvar

@deffn Macro Skip-Test-Unless prerequisite reason description body
@var{prerequisite} is usually a feature test (@code{featurep}, @code{boundp}, @code{fboundp}). @var{reason} is a string describing the prerequisite; it must be unique because it is used as a hash key in a table of reasons for skipping tests. @var{description} describes the tests being skipped, for the test result summary. @var{body} is a @code{progn}-like body, and may contain several tests.
@end deffn

@code{Skip-Test-Unless} is defined temporarily within the test function. Here's an example of usage from @file{syntax-tests.el}:

@example
;; Test forward-comment at buffer boundaries
(with-temp-buffer
  ;; try to use exactly what you need: featurep, boundp, fboundp
  (Skip-Test-Unless (fboundp 'c-mode)
                    "c-mode unavailable"
                    "comment and parse-partial-sexp tests"
    ;; and here's the test code
    (c-mode)
    (insert "// comment\n")
    (forward-comment -2)
    (Assert (eq (point) (point-min)))
    (let ((point (point)))
      (insert "/* comment */")
      (goto-char point)
      (forward-comment 2)
      (Assert (eq (point) (point-max)))
      (parse-partial-sexp point (point-max)))))
@end example

@code{Skip-Test-Unless} is intended for use with features that are normally present in typical configurations. For truly optional features, or tests that apply to one of several alternative implementations (e.g., to GTK widgets, but not Athena, Motif, MS Windows, or Carbon), simply silently suppress the test if the feature is not available.

Here are a few general hints for writing tests.

@enumerate
@item
Include related successful cases. Fixes often break something.

@item
Use the @code{Known-Bug-Expect-Failure} macro to mark the cases you know are going to fail. We want to be able to distinguish between regressions and other unexpected failures, and cases that have been (partially) analyzed but not yet repaired.

@item
Mark the bug with the date of report. An ``Unfixed since yyyy-mm-dd'' gloss for @code{Known-Bug-Expect-Failure} is planned to further increase developer embarrassment (== incentive to fix the bug), but until then at least put a comment about the date, so we can easily see when it was first reported.

@item
It's a matter of your judgement, but you should often use generic tests (@emph{e.g.}, @code{eq}) instead of more specific tests (@code{=} for numbers) even though you know that arguments ``should'' be of the correct type. That is, if the functions used can return generic objects (typically @code{nil}), as well as some more specific type that will be returned on success. We don't want failures of those assertions reported as ``other failures'' (a wrong-type-arg signal, rather than a null return), we want them reported as ``assertion failures.'' One example is a test that tests @code{(= (string-match this that) 0)}, expecting a successful match.
Now suppose @code{string-match} is broken such that the match fails. Then it will return @code{nil}, and @code{=} will signal ``wrong-type-argument, number-char-or-marker-p, nil'', generating an ``other failure'' in the report. But this should be reported as an assertion failure (the test failed in a foreseeable way), rather than something else (we don't know what happened because XEmacs is broken in a way that we weren't trying to test!) @end enumerate @node Modules for Regression Testing, , How to Regression-Test, Regression Testing XEmacs @section Modules for Regression Testing @cindex modules for regression testing @cindex regression testing, modules for @example @file{test-harness.el} @file{base64-tests.el} @file{byte-compiler-tests.el} @file{case-tests.el} @file{ccl-tests.el} @file{c-tests.el} @file{database-tests.el} @file{extent-tests.el} @file{hash-table-tests.el} @file{lisp-tests.el} @file{md5-tests.el} @file{mule-tests.el} @file{regexp-tests.el} @file{symbol-tests.el} @file{syntax-tests.el} @file{tag-tests.el} @file{weak-tests.el} @end example @file{test-harness.el} defines the macros @code{Assert}, @code{Check-Error}, @code{Check-Error-Message}, and @code{Check-Message}. The other files are test files, testing various XEmacs facilities. @xref{Regression Testing XEmacs}. @node CVS Techniques, XEmacs from the Inside, Regression Testing XEmacs, Top @chapter CVS Techniques @cindex CVS techniques @menu * Merging a Branch into the Trunk:: @end menu @node Merging a Branch into the Trunk, , CVS Techniques, CVS Techniques @section Merging a Branch into the Trunk @cindex merging a branch into the trunk @enumerate @item If you haven't already done a merge, you will be merging from the branch point; otherwise you'll be merging from the last merge point, which should be marked by a tag, e.g. @samp{last-sync-ben-mule-21-5}. In the former case, create the last-sync tag, e.g. @example crw rtag -r ben-mule-21-5-bp last-sync-ben-mule-21-5 xemacs @end example (You did create a branch point tag when you created the branch, didn't you?) @item Check everything in on your branch. @item Tag your branch with a pre-sync tag, e.g. @example crw rtag -r ben-mule-21-5 ben-mule-21-5-pre-feb-20-2002-sync xemacs @end example Note, you need to use rtag and specify a version with @samp{-r} (use @samp{-r HEAD} if necessary) so that removed files are handled correctly in some obscure cases. See section 4.8 of the CVS manual. @item Tag the trunk so you have a stable place to merge up to in case people are asynchronously committing to the trunk, e.g. @example crw rtag -r HEAD main-branch-ben-mule-21-5-syncpoint-feb-20-2002 xemacs crw rtag -F -r main-branch-ben-mule-21-5-syncpoint-feb-20-2002 next-sync-ben-mule-21-5 xemacs @end example Use -F in the second case because the name might already exist, e.g. if you've already done a merge. We make two tags because one is a permanent mark indicating a syncpoint when merging, and the other is a symbolic tag to make other operations easier. @item Make a backup of your source tree (not totally necessary but useful for reference and peace of mind): Move one level up from the top directory of your branch and do, e.g. @example cp -a mule mule-backup-2-23-02 @end example @item Now, we're ready to merge! Make sure you're in the top directory of your branch and do, e.g. @example cvs update -j last-sync-ben-mule-21-5 -j next-sync-ben-mule-21-5 @end example @item Fix all merge conflicts. Get the sucker to compile and run. @item Tag your branch with a post-sync tag, e.g. 
@example
cvs rtag -r ben-mule-21-5 ben-mule-21-5-post-feb-20-2002-sync xemacs
@end example

@item
Update the last-sync tag, e.g.

@example
cvs rtag -F -r next-sync-ben-mule-21-5 last-sync-ben-mule-21-5 xemacs
@end example
@end enumerate

@node XEmacs from the Inside, Basic Types, CVS Techniques, Top
@chapter XEmacs from the Inside
@cindex XEmacs from the inside
@cindex inside, XEmacs from the

Internally, XEmacs is quite complex, and can be very confusing.  To
simplify things, it can be useful to think of XEmacs as containing an
event loop that ``drives'' everything, and a number of other
subsystems, such as a Lisp engine and a redisplay mechanism.  Each of
these other subsystems exists simultaneously in XEmacs, and each has a
certain state.  The flow of control continually passes in and out of
these different subsystems in the course of normal operation of the
editor.

It is important to keep in mind that, most of the time, the editor is
``driven'' by the event loop.  Except during initialization and batch
mode, all subsystems are entered directly or indirectly through the
event loop, and ultimately, control exits out of all subsystems back up
to the event loop.  This cycle of entering a subsystem, exiting back
out to the event loop, and starting another iteration of the event loop
occurs once for each keystroke, mouse motion, etc.

If you're trying to understand a particular subsystem (other than the
event loop), think of it as a ``daemon'' process or ``servant'' that is
responsible for one particular aspect of a larger system, and
periodically receives commands or environment changes that cause it to
do something.  Ultimately, these commands and environment changes are
always triggered by the event loop.  For example:

@itemize @bullet
@item
The window and frame mechanism is responsible for keeping track of what
windows and frames exist, what buffers are in them, etc.  It is
periodically given commands (usually from the user) to make a change to
the current window/frame state: i.e. create a new frame, delete a
window, etc.

@item
The buffer mechanism is responsible for keeping track of what buffers
exist and what text is in them.  It is periodically given commands
(usually from the user) to insert or delete text, create a buffer, etc.
When it receives a text-change command, it notifies the redisplay
mechanism.

@item
The redisplay mechanism is responsible for making sure that windows and
frames are displayed correctly.  It is periodically told (by the event
loop) to actually ``do its job'', i.e. snoop around and see what the
current state of the environment (mostly of the currently-existing
windows, frames, and buffers) is, and make sure that state matches
what's actually displayed.  It keeps lots and lots of information
around (such as what is actually being displayed currently, and what
the environment was last time it checked) so that it can minimize the
work it has to do.  It is also helped along in that whenever a relevant
change to the environment occurs, the redisplay mechanism is told about
this, so it has a pretty good idea of where it has to look to find
possible changes and doesn't have to look everywhere.

@item
The Lisp engine is responsible for executing the Lisp code in which
most user commands are written.  It is entered through a call to
@code{eval} or @code{funcall}, which occurs as a result of dispatching
an event from the event loop.  The functions it calls issue commands to
the buffer mechanism, the window/frame subsystem, etc.
@item The Lisp allocation subsystem is responsible for keeping track of Lisp objects. It is given commands from the Lisp engine to allocate objects, garbage collect, etc. @end itemize etc. The important idea here is that there are a number of independent subsystems each with its own responsibility and persistent state, just like different employees in a company, and each subsystem is periodically given commands from other subsystems. Commands can flow from any one subsystem to any other, but there is usually some sort of hierarchy, with all commands originating from the event subsystem. XEmacs is entered in @code{main()}, which is in @file{emacs.c}. When this is called the first time (in a properly-invoked @file{temacs}), it does the following: @enumerate @item It does some very basic environment initializations, such as determining where it and its directories (e.g. @file{lisp/} and @file{etc/}) reside and setting up signal handlers. @item It initializes the entire Lisp interpreter. @item It sets the initial values of many built-in variables (including many variables that are visible to Lisp programs), such as the global keymap object and the built-in faces (a face is an object that describes the display characteristics of text). This involves creating Lisp objects and thus is dependent on step (2). @item It performs various other initializations that are relevant to the particular environment it is running in, such as retrieving environment variables, determining the current date and the user who is running the program, examining its standard input, creating any necessary file descriptors, etc. @item At this point, the C initialization is complete. A Lisp program that was specified on the command line (usually @file{loadup.el}) is called (temacs is normally invoked as @code{temacs -batch -l loadup.el dump}). @file{loadup.el} loads all of the other Lisp files that are needed for the operation of the editor, calls the @code{dump-emacs} function to write out @file{xemacs}, and then kills the temacs process. @end enumerate When @file{xemacs} is then run, it only redoes steps (1) and (4) above; all variables already contain the values they were set to when the executable was dumped, and all memory that was allocated with @code{malloc()} is still around. (XEmacs knows whether it is being run as @file{xemacs} or @file{temacs} because it sets the global variable @code{initialized} to 1 after step (4) above.) At this point, @file{xemacs} calls a Lisp function to do any further initialization, which includes parsing the command-line (the C code can only do limited command-line parsing, which includes looking for the @samp{-batch} and @samp{-l} flags and a few other flags that it needs to know about before initialization is complete), creating the first frame (or @dfn{window} in standard window-system parlance), running the user's init file (usually the file @file{.emacs} in the user's home directory), etc. The function to do this is usually called @code{normal-top-level}; @file{loadup.el} tells the C code about this function by setting its name as the value of the Lisp variable @code{top-level}. When the Lisp initialization code is done, the C code enters the event loop, and stays there for the duration of the XEmacs process. The code for the event loop is contained in @file{cmdloop.c}, and is called @code{Fcommand_loop_1()}. Note that this event loop could very well be written in Lisp, and in fact a Lisp version exists; but apparently, doing this makes XEmacs run noticeably slower. 
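The drive cycle described above can be modeled in miniature.  The
following standalone toy program is purely illustrative (none of it is
actual XEmacs code, and every name in it is invented): a loop fetches
events and dispatches them, and all other processing happens downstream
of that dispatch.

@example
#include <stdio.h>

/* Toy model of the drive cycle; all types and functions here are
   invented for illustration.  */
typedef enum @{ KEY_PRESS, MOUSE_MOTION, QUIT_EVENT @} toy_event_type;
typedef struct @{ toy_event_type type; int data; @} toy_event;

/* Stub ``subsystems''; in XEmacs these would be the Lisp engine,
   the buffer mechanism, redisplay, and so on.  */
static void
dispatch_to_lisp_engine (int key)
@{
  printf ("Lisp engine handles key %d\n", key);
@}

static void
run_redisplay (void)
@{
  printf ("redisplay brings the frame up to date\n");
@}

/* A canned event stream standing in for user input.  */
static toy_event stream[] =
@{
  @{ KEY_PRESS, 'a' @}, @{ MOUSE_MOTION, 42 @}, @{ QUIT_EVENT, 0 @}
@};

int
main (void)
@{
  int i;

  /* The event loop: fetch the next event, dispatch it, then let
     redisplay run; control always returns here.  */
  for (i = 0; stream[i].type != QUIT_EVENT; i++)
    @{
      if (stream[i].type == KEY_PRESS)
        dispatch_to_lisp_engine (stream[i].data);
      run_redisplay ();
    @}
  return 0;
@}
@end example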
Notice how much of the initialization is done in Lisp, not in C.  In
general, XEmacs tries to move as much code as possible into Lisp.  Code
that remains in C is code that implements the Lisp interpreter itself,
or code that needs to be very fast, or code that needs to do system
calls or other such stuff that needs to be done in C, or code that
needs to have access to ``forbidden'' structures.  (One conscious
aspect of the design of Lisp under XEmacs is a clean separation between
the external interface to a Lisp object's functionality and its
internal implementation.  Part of this design is that Lisp programs are
forbidden from accessing the contents of the object other than through
a standard API.  In this respect, XEmacs Lisp is similar to modern Lisp
dialects but differs from GNU Emacs, which tends to expose the
implementation and allow Lisp programs to look at it directly.  The
major advantage of hiding the implementation is that it allows the
implementation to be redesigned without affecting any Lisp programs,
including those that might want to be ``clever'' by looking directly at
the object's contents and possibly manipulating them.)

Moving code into Lisp makes the code easier to debug and maintain and
makes it much easier for people who are not XEmacs developers to
customize XEmacs, because they can make a change with much less chance
of obscure and unwanted interactions occurring than if they were to
change the C code.

@node Basic Types, Low-Level Allocation, XEmacs from the Inside, Top
@chapter Basic Types
@cindex basic types
@cindex types, basic

Not yet documented.

@node Low-Level Allocation, The XEmacs Object System (Abstractly Speaking), Basic Types, Top
@chapter Low-Level Allocation
@cindex low-level allocation
@cindex allocation, low-level

@menu
* Basic Heap Allocation::
* Stack Allocation::
* Dynamic Arrays::
* Allocation by Blocks::
* Modules for Allocation::
@end menu

@node Basic Heap Allocation, Stack Allocation, Low-Level Allocation, Low-Level Allocation
@section Basic Heap Allocation
@cindex basic heap allocation

@node Stack Allocation, Dynamic Arrays, Basic Heap Allocation, Low-Level Allocation
@section Stack Allocation
@cindex stack allocation

@node Dynamic Arrays, Allocation by Blocks, Stack Allocation, Low-Level Allocation
@section Dynamic Arrays
@cindex dynamic arrays
@cindex dynamic array

The @code{Dynarr} type implements a @dfn{dynamic array}, which is
similar to a standard C array but has no fixed limit on the number of
elements it can contain.  Dynamic arrays can hold elements of any type,
and when you add a new element, the array automatically resizes itself
if it isn't big enough.  Dynarrs are extensively used in the redisplay
mechanism.

A ``dynamic array'' is a contiguous array of fixed-size elements where
there is no upper limit (except available memory) on the number of
elements in the array.  Because the elements are maintained
contiguously, space is used efficiently (no per-element pointers
necessary) and random access to a particular element is in constant
time.  At any one point, the block of memory that holds the array has
an upper limit; if this limit is exceeded, the memory is
@code{realloc()}ed into a new array that is twice as big.  Assuming
that the time to grow the array is on the order of the new size of the
array block, this scheme has a provably constant amortized time
(i.e. average time over all additions).  When you add elements or
retrieve elements, pointers are used.
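The following sketch shows the declaration pattern and a few of the
operations documented in the rest of this section in use.  It is
illustrative only: it assumes the XEmacs source environment
(@file{lisp.h}), and the element type @code{coordinate} is invented for
the example.

@example
/* Hypothetical element type, for illustration only.  */
typedef struct
@{
  int x, y;
@} coordinate;

/* Declare a dynamic array whose elements are coordinates.  */
typedef struct
@{
  Dynarr_declare (coordinate);
@} coordinate_dynarr;

int
coordinate_dynarr_example (void)
@{
  coordinate_dynarr *dy = Dynarr_new (coordinate);
  coordinate points[2] = @{ @{ 1, 2 @}, @{ 3, 4 @} @};
  int i, sum = 0;

  /* The elements themselves are copied into the array.  */
  Dynarr_add_many (dy, points, 2);
  for (i = 0; i < Dynarr_length (dy); i++)
    sum += Dynarr_at (dy, i).x;   /* the element itself is returned */
  Dynarr_free (dy);
  return sum;
@}
@end example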
Note that the element itself (of whatever size it is), and not the
pointer to it, is stored in the array; thus you do not have to allocate
any heap memory on your own.  Also, returned pointers are only
guaranteed to be valid until the next operation that changes the length
of the array.

This is a container object.  Declare a dynamic array of a specific type
as follows:

@example
typedef struct
@{
  Dynarr_declare (mytype);
@} mytype_dynarr;
@end example

Use the following functions/macros:

@example
void *Dynarr_new(type)
   [MACRO] Create a new dynamic-array object, with each element of
   the specified type.  The return value is cast to (type##_dynarr).
   This requires following the convention that types are declared in
   such a way that this type concatenation works.  In particular,
   TYPE must be a symbol, not an arbitrary C type.

Dynarr_add(d, el)
   [MACRO] Add an element to the end of a dynamic array.  EL is a
   pointer to the element; the element itself is stored in the array,
   however.  No function call is performed unless the array needs to
   be resized.

Dynarr_add_many(d, base, len)
   [MACRO] Add LEN elements to the end of the dynamic array.  The
   elements should be contiguous in memory, starting at BASE.  If
   BASE is NULL, just make space for the elements; don't actually
   add them.

Dynarr_insert_many_at_start(d, base, len)
   [MACRO] Insert LEN elements at the beginning of the dynamic
   array.  The elements should be contiguous in memory, starting at
   BASE.  If BASE is NULL, just make space for the elements; don't
   actually add them.

Dynarr_insert_many(d, base, len, start)
   Insert LEN elements into the dynamic array starting at position
   START.  The elements should be contiguous in memory, starting at
   BASE.  If BASE is NULL, just make space for the elements; don't
   actually add them.

Dynarr_delete(d, i)
   [MACRO] Delete an element from the dynamic array at position I.

Dynarr_delete_many(d, start, len)
   Delete LEN elements from the dynamic array starting at position
   START.

Dynarr_delete_by_pointer(d, p)
   [MACRO] Delete an element from the dynamic array at pointer P,
   which must point within the block of memory that stores the data.
   P should be obtained using Dynarr_atp().

int Dynarr_length(d)
   [MACRO] Return the number of elements currently in a dynamic
   array.

int Dynarr_largest(d)
   [MACRO] Return the maximum value that Dynarr_length(d) would
   ever have returned.

type Dynarr_at(d, i)
   [MACRO] Return the element at the specified index (no bounds
   checking done on the index).  The element itself is returned, not
   a pointer to it.

type *Dynarr_atp(d, i)
   [MACRO] Return a pointer to the element at the specified index
   (no bounds checking done on the index).  The pointer may not be
   valid after an element is added to or removed from the array.

Dynarr_reset(d)
   [MACRO] Reset the length of a dynamic array to 0.

Dynarr_free(d)
   Destroy a dynamic array and the memory allocated to it.
@end example

Use the following global variable:

@example
Dynarr_min_size
   Minimum allowable size for a dynamic array when it is resized.
@end example

@node Allocation by Blocks, Modules for Allocation, Dynamic Arrays, Low-Level Allocation
@section Allocation by Blocks
@cindex allocation by blocks

The @code{Blocktype} type efficiently manages the allocation of
fixed-size blocks by minimizing the number of times that
@code{malloc()} and @code{free()} are called.  It allocates memory in
large chunks, subdivides the chunks into blocks of the proper size, and
returns the blocks as requested.  When blocks are freed, they are
placed onto a linked list, so they can be efficiently reused.
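As with dynamic arrays, a short sketch may help make the interface
concrete.  The example below is illustrative only: it assumes the
XEmacs source environment (@file{blocktype.h}), uses the declaration
pattern and macros detailed in the rest of this section, and the block
type @code{node} is invented for the example.

@example
/* Hypothetical fixed-size block type, for illustration only.  */
typedef struct
@{
  int key;
  int value;
@} node;

struct node_blocktype
@{
  Blocktype_declare (node);
@};

void
node_blocktype_example (void)
@{
  struct node_blocktype *bt = Blocktype_new (struct node_blocktype);
  node *n = Blocktype_alloc (bt);

  n->key = 1;
  n->value = 42;
  /* ... use N ... */
  Blocktype_free (bt, n);   /* N goes on the free list for reuse */
  Blocktype_delete (bt);    /* release the block-type object itself */
@}
@end example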
This data type is not much used in XEmacs currently, because it's a
fairly new addition.

A ``block-type object'' is used to efficiently allocate and free blocks
of a particular size.  Freed blocks are remembered in a free list and
are reused as necessary to allocate new blocks, so as to avoid, as much
as possible, calls to @code{malloc()} and @code{free()}.

This is a container object.  Declare a block-type object of a specific
type as follows:

@example
struct mytype_blocktype
@{
  Blocktype_declare (mytype);
@};
@end example

Use the following functions/macros:

@example
structype *Blocktype_new(structype)
   [MACRO] Create a new block-type object of the specified type.
   The argument to this call should be the type of object to be
   created, e.g. foobar_blocktype.

type *Blocktype_alloc(b)
   [MACRO] Allocate a block of the proper type for the specified
   block-type object and return a pointer to it.

Blocktype_free(b, block)
   Free a block of the type corresponding to the specified
   block-type object.

Blocktype_delete(b)
   Destroy a block-type object and the memory allocated to it.
@end example

@node Modules for Allocation, , Allocation by Blocks, Low-Level Allocation
@section Modules for Allocation
@cindex modules for allocation

@example
@file{alloca.c}
@file{free-hook.c}
@file{getpagesize.h}
@file{gmalloc.c}
@file{malloc.c}
@file{mem-limits.h}
@file{ralloc.c}
@file{vm-limit.c}
@end example

These handle basic C allocation of memory.  @file{alloca.c} is an
emulation of the stack allocation function @code{alloca()} on machines
that lack this.  (XEmacs makes extensive use of @code{alloca()} in its
code.)

@file{gmalloc.c} and @file{malloc.c} are two implementations of the
standard C functions @code{malloc()}, @code{realloc()} and
@code{free()}.  They are often used in place of the standard
system-provided @code{malloc()} because they usually provide a much
faster implementation, at the expense of additional memory use.
@file{gmalloc.c} is a newer implementation that is much more
memory-efficient for large allocations than @file{malloc.c}, and should
always be preferred if it works.  (At one point, @file{gmalloc.c}
didn't work on some systems where @file{malloc.c} worked; but this
should be fixed now.)

@cindex relocating allocator
@file{ralloc.c} is the @dfn{relocating allocator}.  It provides
functions similar to @code{malloc()}, @code{realloc()} and
@code{free()} that allocate memory that can be dynamically relocated in
memory.  The advantage of this is that allocated memory can be shuffled
around to place all the free memory at the end of the heap, and the
heap can then be shrunk, releasing the memory back to the operating
system.  The use of this can be controlled with the configure option
@code{--rel-alloc}; if enabled, memory allocated for buffers will be
relocatable, so that if a very large file is visited and the buffer is
later killed, the memory can be released to the operating system.  (The
disadvantage of this mechanism is that it can be very slow.  On systems
with the @code{mmap()} system call, the XEmacs version of
@file{ralloc.c} uses this to move memory around without actually having
to block-copy it, which can speed things up; but it can still cause
noticeable performance degradation.)

On Linux systems using @samp{glibc 2}, these strategies are built into
the so-called ``Doug Lea malloc.''  See, for example, Doug Lea's home
page, especially @uref{http://gee.cs.oswego.edu/dl/html/malloc.html,``A
Memory Allocator''}.  The source file, @file{malloc.c} (available at
the same place) is copiously (and usefully!) commented.
@uref{http://www.malloc.de/,Wolfram Gloger's home page} may also be
useful.

@file{free-hook.c} contains some debugging functions for checking for
invalid arguments to @code{free()}.

@file{vm-limit.c} contains some functions that warn the user when
memory is getting low.  These are callback functions that are called by
@file{gmalloc.c} and @file{malloc.c} at appropriate times.

@file{getpagesize.h} provides a uniform interface for retrieving the
size of a page in virtual memory.  @file{mem-limits.h} provides a
uniform interface for retrieving the total amount of available virtual
memory.  Both are similar in spirit to the @file{sys*.h} files
described later in this manual.

@example
@file{blocktype.c}
@file{blocktype.h}
@file{dynarr.c}
@end example

These implement a couple of basic C data types to facilitate memory
allocation.

@node The XEmacs Object System (Abstractly Speaking), How Lisp Objects Are Represented in C, Low-Level Allocation, Top
@chapter The XEmacs Object System (Abstractly Speaking)
@cindex XEmacs object system (abstractly speaking), the
@cindex object system (abstractly speaking), the XEmacs

At the heart of the Lisp interpreter is its management of objects.
XEmacs Lisp contains many built-in objects, some of which are simple
and others of which can be very complex; and some of which are very
common, and others of which are rarely used or are only used
internally.  (Since the Lisp allocation system, with its automatic
reclamation of unused storage, is so much more convenient than
@code{malloc()} and @code{free()}, the C code makes extensive use of it
in its internal operations.)

The basic Lisp objects are

@table @code
@item integer
31 bits of precision, or 63 bits on 64-bit machines; the reason for
this is explained below, where the internal representation of Lisp
objects is described.
@item char
An object representing a single character of text; chars behave like
integers in many ways but are logically considered text rather than
numbers and have a different read syntax.  (The read syntax for a char
contains the char itself or some textual encoding of it---for example,
a Japanese Kanji character might be encoded as @samp{^[$(B#&^[(B} using
the ISO-2022 encoding standard---rather than the numerical
representation of the char; this way, if the mapping between chars and
integers changes, which is quite possible for Kanji characters and
other extended characters, the same character will still be created.
Note that some primitives confuse chars and integers.  The worst
culprit is @code{eq}, which makes a special exception and considers a
char to be @code{eq} to its integer equivalent, even though in no other
case are objects of two different types @code{eq}.  The reason for this
monstrosity is compatibility with existing code; the separation of char
from integer came fairly recently.)
@item float
Same precision as a @code{double} in C.
@item bignum
@itemx ratio
@itemx bigfloat
As build-time options, arbitrary-precision numbers are available.
Bignums are integers, and when available they remove the restriction on
buffer size.  Ratios are non-integral rational numbers.  Bigfloats are
arbitrary-precision floating point numbers, with precision specified at
runtime.
@item symbol
An object that contains Lisp objects and is referred to by name;
symbols are used to implement variables and named functions and to
provide the equivalent of preprocessor constants in C.
@item string
Self-explanatory; behaves much like a vector of chars but has a
different read syntax and is stored and manipulated more compactly.
@item bit-vector A vector of bits; similar to a string in spirit. @item vector A one-dimensional array of Lisp objects providing constant-time access to any of the objects; access to an arbitrary object in a vector is faster than for lists, but the operations that can be done on a vector are more limited. @item compiled-function An object containing compiled Lisp code, known as @dfn{byte code}. @item subr A Lisp primitive, i.e. a Lisp-callable function implemented in C. @item cons A simple container for two Lisp objects, used to implement lists and most other data structures in Lisp. @end table Objects which are not conses are called atoms. @cindex closure Note that there is no basic ``function'' type, as in more powerful versions of Lisp (where it's called a @dfn{closure}). XEmacs Lisp does not provide the closure semantics implemented by Common Lisp and Scheme. The guts of a function in XEmacs Lisp are represented in one of four ways: a symbol specifying another function (when one function is an alias for another), a list (whose first element must be the symbol @code{lambda}) containing the function's source code, a compiled-function object, or a subr object. (In other words, given a symbol specifying the name of a function, calling @code{symbol-function} to retrieve the contents of the symbol's function cell will return one of these types of objects.) XEmacs Lisp also contains numerous specialized objects used to implement the editor: @table @code @item buffer Stores text like a string, but is optimized for insertion and deletion and has certain other properties that can be set. @item frame An object with various properties whose displayable representation is a @dfn{window} in window-system parlance. @item window A section of a frame that displays the contents of a buffer; often called a @dfn{pane} in window-system parlance. @item window-configuration An object that represents a saved configuration of windows in a frame. @item device An object representing a screen on which frames can be displayed; equivalent to a @dfn{display} in the X Window System and a @dfn{TTY} in character mode. @item face An object specifying the appearance of text or graphics; it has properties such as font, foreground color, and background color. @item marker An object that refers to a particular position in a buffer and moves around as text is inserted and deleted to stay in the same relative position to the text around it. @item extent Similar to a marker but covers a range of text in a buffer; can also specify properties of the text, such as a face in which the text is to be displayed, whether the text is invisible or unmodifiable, etc. @item event Generated by calling @code{next-event} and contains information describing a particular event happening in the system, such as the user pressing a key or a process terminating. @item keymap An object that maps from events (described using lists, vectors, and symbols rather than with an event object because the mapping is for classes of events, rather than individual events) to functions to execute or other events to recursively look up; the functions are described by name, using a symbol, or using lists to specify the function's code. @item glyph An object that describes the appearance of an image (e.g. pixmap) on the screen; glyphs can be attached to the beginning or end of extents and in some future version of XEmacs will be able to be inserted directly into a buffer. @item process An object that describes a connection to an externally-running process. 
@end table

There are some other, less-commonly-encountered general objects:

@table @code
@item hash-table
An object that maps from an arbitrary Lisp object to another arbitrary
Lisp object, using hashing for fast lookup.
@item obarray
A limited form of hash-table that maps from strings to symbols;
obarrays are used to look up a symbol given its name and are not
actually their own object type but are kludgily represented using
vectors with hidden fields (this representation derives from GNU
Emacs).
@item specifier
A complex object used to specify the value of a display property; a
default value is given and different values can be specified for
particular frames, buffers, windows, devices, or classes of device.
@item char-table
An object that maps from chars or classes of chars to arbitrary Lisp
objects; internally char tables use a complex nested-vector
representation that is optimized to the way characters are represented
as integers.
@item range-table
An object that maps from ranges of integers to arbitrary Lisp objects.
@end table

And some strange special-purpose objects:

@table @code
@item charset
@itemx coding-system
Objects used when MULE, or multi-lingual/Asian-language, support is
enabled.
@item color-instance
@itemx font-instance
@itemx image-instance
An object that encapsulates a window-system resource; instances are
mostly used internally but are exposed on the Lisp level for cleanness
of the specifier model and because it's occasionally useful for Lisp
programs to create or query the properties of instances.
@item subwindow
An object that encapsulates a @dfn{subwindow} resource, i.e. a
window-system child window that is drawn into by an external process;
this object should be integrated into the glyph system but isn't yet,
and may change form when this is done.
@item tooltalk-message
@itemx tooltalk-pattern
Objects that represent resources used in the ToolTalk interprocess
communication protocol.
@item toolbar-button
An object used in conjunction with the toolbar.
@end table

And objects that are only used internally:

@table @code
@item opaque
A generic object for encapsulating arbitrary memory; this allows you
the generality of @code{malloc()} and the convenience of the Lisp
object system.
@item lstream
A buffering I/O stream, used to provide a unified interface to anything
that can accept output or provide input, such as a file descriptor, a
stdio stream, a chunk of memory, a Lisp buffer, a Lisp string, etc.;
it's a Lisp object to make its memory management more convenient.
@item char-table-entry
Subsidiary objects in the internal char-table representation.
@item extent-auxiliary
@itemx menubar-data
@itemx toolbar-data
Various special-purpose objects that are basically just used to
encapsulate memory for particular subsystems, similar to the more
general ``opaque'' object.
@item symbol-value-forward
@itemx symbol-value-buffer-local
@itemx symbol-value-varalias
@itemx symbol-value-lisp-magic
Special internal-only objects that are placed in the value cell of a
symbol to indicate that there is something special about this
variable---e.g. it has no value, it mirrors another variable, or it
mirrors some C variable; there is really only one kind of object,
called a @dfn{symbol-value-magic}, but it is sort-of halfway kludged
into semi-different object types.
@end table

@cindex permanent objects
@cindex temporary objects
Some types of objects are @dfn{permanent}, meaning that once created,
they do not disappear until explicitly destroyed, using a function such
as @code{kill-buffer}, @code{delete-window}, @code{delete-frame}, etc.
Others will disappear once they are no longer used, through the garbage
collection mechanism.  Buffers, frames, windows, devices, and processes
are among the objects that are permanent.  Note that some objects can
go both ways: Faces can be created either way; extents are normally
permanent, but detached extents (extents not referring to any text, as
happens to some extents when the text they are referring to is deleted)
are temporary.  Note that some permanent objects, such as faces and
coding systems, cannot be deleted.  Note also that windows are unique
in that they can be @emph{undeleted} after having previously been
deleted.  (This happens as a result of restoring a window
configuration.)

@cindex read syntax
Many types of objects have a @dfn{read syntax}, i.e. a way of
specifying an object of that type in Lisp code.  When you load a Lisp
file, or type in code to be evaluated, what really happens is that the
function @code{read} is called, which reads some text and creates an
object based on the syntax of that text; then @code{eval} is called,
which possibly does something special; then this loop repeats until
there's no more text to read.  (@code{eval} only actually does
something special with symbols, which causes the symbol's value to be
returned, similar to referencing a variable; and with conses
[i.e. lists], which cause a function invocation.  All other values are
returned unchanged.)

The read syntax

@example
17297
@end example

converts to an integer whose value is 17297.

@example
355/113
@end example

converts to a ratio commonly used to approximate @emph{pi} when ratios
are configured, and otherwise to a symbol whose name is ``355/113''
(for backward compatibility).

@example
1.983e-4
@end example

converts to a float whose value is 1.983e-4, or .0001983.

@example
?b
@end example

converts to a char that represents the lowercase letter b.

@example
?^[$(B#&^[(B
@end example

(where @samp{^[} actually is an @samp{ESC} character) converts to a
particular Kanji character when using an ISO2022-based coding system
for input.  (To decode this goo: @samp{ESC} begins an escape sequence;
@samp{ESC $ (} is a class of escape sequences meaning ``switch to a
94x94 character set''; @samp{ESC $ ( B} means ``switch to Japanese
Kanji''; @samp{#} and @samp{&} collectively index into a 94-by-94 array
of characters [subtract 33 from the ASCII value of each character to
get the corresponding index]; @samp{ESC (} is a class of escape
sequences meaning ``switch to a 94 character set''; @samp{ESC (B} means
``switch to US ASCII''.  It is a coincidence that the letter @samp{B}
is used to denote both Japanese Kanji and US ASCII.  If the first
@samp{B} were replaced with an @samp{A}, you'd be requesting a Chinese
Hanzi character from the GB2312 character set.)

@example
"foobar"
@end example

converts to a string.

@example
foobar
@end example

converts to a symbol whose name is @code{"foobar"}.  This is done by
looking up the string equivalent in the global variable @code{obarray},
whose contents should be an obarray.  If no symbol is found, a new
symbol with the name @code{"foobar"} is automatically created and added
to @code{obarray}; this process is called @dfn{interning} the symbol.
@cindex interning

@example
(foo .
bar)
@end example

converts to a cons cell containing the symbols @code{foo} and
@code{bar}.

@example
(1 a 2.5)
@end example

converts to a three-element list containing the specified objects (note
that a list is actually a set of nested conses; see the XEmacs Lisp
Reference).

@example
[1 a 2.5]
@end example

converts to a three-element vector containing the specified objects.

@example
#[... ... ... ...]
@end example

converts to a compiled-function object (the actual contents are not
shown since they are not relevant here; look at a file that ends with
@file{.elc} for examples).

@example
#*01110110
@end example

converts to a bit-vector.

@example
#s(hash-table ... ...)
@end example

converts to a hash table (the actual contents are not shown).

@example
#s(range-table ... ...)
@end example

converts to a range table (the actual contents are not shown).

@example
#s(char-table ... ...)
@end example

converts to a char table (the actual contents are not shown).

Note that the @code{#s()} syntax is the general syntax for structures,
which are not really implemented in XEmacs Lisp but should be.

When an object is printed out (using @code{print} or a related
function), the read syntax is used, so that the same object can be read
in again.

The other objects do not have read syntaxes, usually because it does
not really make sense to create them in this fashion (e.g. processes,
where it doesn't make sense to have a subprocess created as a side
effect of reading some Lisp code), or because they can't be created at
all (e.g. subrs).  Permanent objects, as a rule, do not have a read
syntax; nor do most complex objects, which contain too much state to be
easily initialized through a read syntax.

@node How Lisp Objects Are Represented in C, Allocation of Objects in XEmacs Lisp, The XEmacs Object System (Abstractly Speaking), Top
@chapter How Lisp Objects Are Represented in C
@cindex Lisp objects are represented in C, how
@cindex objects are represented in C, how Lisp
@cindex represented in C, how Lisp objects are

Lisp objects are represented in C using a 32-bit or 64-bit machine word
(depending on the processor; e.g. DEC Alphas use 64-bit Lisp objects
and most other processors use 32-bit Lisp objects).  The representation
stuffs a pointer together with a tag, as follows:

@example
[ 3  3  2  2  2  2  2  2  2  2  2  2  1  1  1  1  1  1  1  1  1  1  0  0  0  0  0  0  0  0  0  0 ]
[ 1  0  9  8  7  6  5  4  3  2  1  0  9  8  7  6  5  4  3  2  1  0  9  8  7  6  5  4  3  2  1  0 ]

  <---------------------------------------------------------> <->
            a pointer to a structure, or an integer           tag
@end example

A tag of 00 is used for all pointer object types, a tag of 10 is used
for characters, and the other two tags 01 and 11 are joined together to
form the integer object type.  This representation gives us 31-bit
integers and 30-bit characters, while pointers are represented directly
without any bit masking or shifting.  This representation, though,
assumes that pointers to structs are always aligned to multiples of 4,
so the lower 2 bits are always zero.

Lisp objects use the typedef @code{Lisp_Object}, but the actual C type
used for the Lisp object can vary.  It can be either a simple type
(@code{long} on the DEC Alpha, @code{int} on other machines) or a
structure whose fields are bit fields that line up properly (actually,
a union of structures is used).
Generally the simple integral type is preferable because it ensures
that the compiler will actually use a machine word to represent the
object (some compilers will use more general and less efficient code
for unions and structs even if they can fit in a machine word).  The
union type, however, has the advantage of stricter type checking.  If
you accidentally pass an integer where a Lisp object is desired, you
get a compile error.  The choice of which type to use is determined by
the preprocessor constant @code{USE_UNION_TYPE}, which is defined via
the @code{--use-union-type} option to @code{configure}.

Various macros are used to convert between @code{Lisp_Object}s and the
corresponding C type.  Macros of the form @code{XINT()},
@code{XCHAR()}, @code{XSTRING()}, @code{XSYMBOL()} do any required bit
shifting and/or masking and cast the result to the appropriate type.
@code{XINT()} needs to be a bit tricky so that negative numbers are
properly sign-extended.  Since integers are stored left-shifted, if the
right-shift operator does an arithmetic shift (i.e. it leaves the
most-significant bit as-is rather than shifting in a zero, so that it
mimics a divide-by-two even for negative numbers) the shift to remove
the tag bit is enough.  This is the case on all the systems we support.

Note that when @code{ERROR_CHECK_TYPECHECK} is defined, the converter
macros become more complicated---they check the tag bits and/or the
type field in the first four bytes of a record type to ensure that the
object is really of the correct type.  This is great for catching
places where an incorrect type is being dereferenced---this typically
results in a pointer being dereferenced as the wrong type of structure,
with unpredictable (and sometimes not easily traceable) results.

There are similar @code{XSET@var{TYPE}()} macros that construct a Lisp
object.  These macros are of the form @code{XSET@var{TYPE}
(@var{lvalue}, @var{result})}, i.e. they have to be a statement rather
than just used in an expression.  The reason for this is that standard
C doesn't let you ``construct'' a structure (but GCC does).  Granted,
this sometimes isn't too convenient; for the case of integers, at
least, you can use the function @code{make_int()}, which constructs and
@emph{returns} an integer Lisp object.  Note that the
@code{XSET@var{TYPE}()} macros are also affected by
@code{ERROR_CHECK_TYPECHECK} and make sure that the structure is of the
right type in the case of record types, where the type is contained in
the structure.

The C programmer is responsible for @strong{guaranteeing} that a
@code{Lisp_Object} is the correct type before using the
@code{X@var{TYPE}} macros.  This is especially important in the case of
lists.  Use @code{XCAR} and @code{XCDR} if a @code{Lisp_Object} is
certainly a cons cell, else use @code{Fcar()} and @code{Fcdr()}.  Trust
other C code, but not Lisp code.  On the other hand, if XEmacs has an
internal logic error, it's better to crash immediately, so sprinkle
@code{assert()}s and ``unreachable'' @code{abort()}s liberally about
the source code.  Where performance is an issue, use
@code{type_checking_assert}, @code{bufpos_checking_assert}, and
@code{gc_checking_assert}, which do nothing unless the corresponding
configure error-checking flag was specified.
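To make the tagging scheme concrete, here is a small standalone model.
It is a toy, not the actual XEmacs source: the @code{toy_} names are
invented, and the real macros handle many more cases.  It shows the two
low tag bits, the ``integer if the lowest bit is set'' rule, and the
reliance on an arithmetic right shift for sign extension.

@example
#include <assert.h>
#include <stdio.h>

/* Toy model of the tagged word described above (invented names, not
   the actual XEmacs macros).  One machine word holds either a pointer
   (low two bits 00), a character (tag 10), or an integer (tags 01 and
   11, i.e. lowest bit set).  */
typedef long toy_Lisp_Object;

#define TOY_INTP(obj)   (((obj) & 1) != 0)
#define TOY_MAKE_INT(n) \
  ((toy_Lisp_Object) (((unsigned long) (n) << 1) | 1))
/* An arithmetic right shift sign-extends, so negative integers
   survive the round trip, as the text explains.  */
#define TOY_XINT(obj)   ((obj) >> 1)

int
main (void)
@{
  toy_Lisp_Object five = TOY_MAKE_INT (5);
  toy_Lisp_Object minus_one = TOY_MAKE_INT (-1);

  assert (TOY_INTP (five));
  assert (TOY_XINT (five) == 5);
  assert (TOY_XINT (minus_one) == -1);
  printf ("payloads: %ld and %ld\n",
          TOY_XINT (five), TOY_XINT (minus_one));
  return 0;
@}
@end example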
@node Allocation of Objects in XEmacs Lisp, The Lisp Reader and Compiler, How Lisp Objects Are Represented in C, Top
@chapter Allocation of Objects in XEmacs Lisp
@cindex allocation of objects in XEmacs Lisp
@cindex objects in XEmacs Lisp, allocation of
@cindex Lisp objects, allocation of in XEmacs

@menu
* Introduction to Allocation::
* Garbage Collection::
* GCPROing::
* Garbage Collection - Step by Step::
* Integers and Characters::
* Allocation from Frob Blocks::
* lrecords::
* Low-level allocation::
* Cons::
* Vector::
* Bit Vector::
* Symbol::
* Marker::
* String::
* Compiled Function::
@end menu

@node Introduction to Allocation, Garbage Collection, Allocation of Objects in XEmacs Lisp, Allocation of Objects in XEmacs Lisp
@section Introduction to Allocation
@cindex allocation, introduction to

Emacs Lisp, like all Lisps, has garbage collection.  This means that
the programmer never has to explicitly free (destroy) an object; it
happens automatically when the object becomes inaccessible.  Most
experts agree that garbage collection is a necessity in a modern,
high-level language.  Its omission from C stems from the fact that C
was originally designed to be a nice abstract layer on top of assembly
language, for writing kernels and basic system utilities rather than
large applications.

Lisp objects can be created by any of a number of Lisp primitives.
Most object types have one or a small number of basic primitives for
creating objects.  For conses, the basic primitive is @code{cons}; for
vectors, the primitives are @code{make-vector} and @code{vector}; for
symbols, the primitives are @code{make-symbol} and @code{intern}; etc.
Some Lisp objects, especially those that are primarily used internally,
have no corresponding Lisp primitives.  Every Lisp object, though, has
at least one C primitive for creating it.

Recall from @ref{How Lisp Objects Are Represented in C} that a Lisp
object, as stored in a 32-bit or 64-bit word, has a few tag bits, and a
``value'' that occupies the remainder of the bits.  We can separate the
different Lisp object types into three broad categories:

@itemize @bullet
@item
(a) Those for which the value directly represents the contents of the
Lisp object.  Only two types are in this category: integers and
characters.  No special allocation or garbage collection is necessary
for such objects.  Lisp objects of these types do not need to be
@code{GCPRO}ed.
@end itemize

In the remaining two categories, the type is stored in the object
itself.  The tag for all such objects is the generic @dfn{lrecord}
(@code{Lisp_Type_Record}) tag.  The first bytes of the object's
structure are an integer (actually a char) characterising the object's
type and some flags, in particular the mark bit used for garbage
collection.  A structure describing the type is accessible through the
@code{lrecord_implementation_table}, indexed by said integer.  This
structure includes the method pointers and a pointer to a string naming
the type.

@itemize @bullet
@item
(b) Those lrecords that are allocated in frob blocks
(@pxref{Allocation from Frob Blocks}).  This includes the objects that
are most common and relatively small, and includes conses, strings,
subrs, floats, compiled functions, symbols, extents, events, and
markers.  With the cleanup of frob blocks done in 19.12, it's not
terribly hard to add more objects to this category, but it's a bit
trickier than adding an object type to type (c) (esp. if the object
needs a finalization method), and is not likely to save much space
unless the object is small and there are many of them.
(In fact, if there are very few of them, it might actually waste
space.)

@item
(c) Those lrecords that are individually @code{malloc()}ed.  These are
called @dfn{lcrecords}.  All other types are in this category.  Adding
a new type to this category is comparatively easy, and all types added
since 19.8 (when the current allocation scheme was devised, by Richard
Mlynarik), with the exception of the character type, have been in this
category.
@end itemize

Note that bit vectors are a bit of a special case.  They are simple
lrecords as in category (b), but are individually @code{malloc()}ed
like vectors.  You can basically view them as exactly like vectors
except that their type is stored in lrecord fashion rather than in
directly-tagged fashion.

@node Garbage Collection, GCPROing, Introduction to Allocation, Allocation of Objects in XEmacs Lisp
@section Garbage Collection
@cindex garbage collection
@cindex mark and sweep

Garbage collection is simple in theory but tricky to implement.  Emacs
Lisp uses the oldest garbage collection method, called @dfn{mark and
sweep}.  Garbage collection begins by starting with all accessible
locations (i.e. all variables and other slots where Lisp objects might
occur) and recursively traversing all objects accessible from those
slots, marking each one that is found.  We then go through all of
memory, freeing each object that is not marked and unmarking each
object that is marked.  Note that ``all of memory'' means all currently
allocated objects.  Traversing all these objects means traversing all
frob blocks, all vectors (which are chained in one big list), and all
lcrecords (which are likewise chained).

Garbage collection can be invoked explicitly by calling
@code{garbage-collect} but is also called automatically by @code{eval},
once a certain amount of memory has been allocated since the last
garbage collection (according to @code{gc-cons-threshold}).

@node GCPROing, Garbage Collection - Step by Step, Garbage Collection, Allocation of Objects in XEmacs Lisp
@section @code{GCPRO}ing
@cindex @code{GCPRO}ing
@cindex garbage collection protection
@cindex protection, garbage collection

@code{GCPRO}ing is one of the ugliest and trickiest parts of Emacs
internals.  The basic idea is that whenever garbage collection occurs,
all in-use objects must be reachable somehow or other from one of the
roots of accessibility.  The roots of accessibility are:

@enumerate
@item
All objects that have been @code{staticpro()}d or
@code{staticpro_nodump()}ed.  This is used for any global C variables
that hold Lisp objects.  A call to @code{staticpro()} happens
implicitly as a result of any symbols declared with @code{defsymbol()}
and any variables declared with @code{DEFVAR_FOO()}.  You need to
explicitly call @code{staticpro()} (in the @code{vars_of_foo()} method
of a module) for other global C variables holding Lisp objects.  (This
typically includes internal lists and such things.)  Use
@code{staticpro_nodump()} only in the rare cases when you do not want
the pointed-to variable to be saved at dump time but rather to
recompute it at startup.

Note that @code{obarray} is one of the @code{staticpro()}d things.
Therefore, all functions and variables get marked through this.

@item
Any shadowed bindings that are sitting on the @code{specpdl} stack.

@item
Any objects sitting in currently active (Lisp) stack frames, catches,
and condition cases.

@item
A couple of special-case places where active objects are located.

@item
Anything currently marked with @code{GCPRO}.
@end enumerate

Marking with @code{GCPRO} is necessary because some C functions (quite
a lot, in fact) allocate objects during their operation.  Quite
frequently, there will be no other pointer to the object while the
function is running, and if a garbage collection occurs and the object
needs to be referenced again, bad things will happen.  The solution is
to mark those references with @code{GCPRO}.  Note that it is a
@emph{reference} that is marked with @code{GCPRO}, not an object.  If
you declare a @code{Lisp_Object} variable, assign to it, @code{GCPRO}
it, and then assign to it again, the first object assigned @emph{is
not} protected, while the second object @emph{is} protected.

Unfortunately @code{GCPRO}ing is easy to forget, and there is basically
no way around this problem.  Here are some rules, though:

@enumerate
@item
A garbage collection can occur whenever anything calls @code{Feval}, or
whenever a @code{QUIT} can occur where execution can continue past
this.  (Remember, this is almost anywhere.)  Note that @code{Fsignal}
can GC, and it can return (even though it normally doesn't).  This
means that you must @code{GCPRO} before calling most of the error
functions, including the @samp{CONCHECK} family of macros, if
references occur after the call.

@item
You @emph{must} @code{UNGCPRO} anything that's @code{GCPRO}ed, and you
@emph{must not} @code{UNGCPRO} if you haven't @code{GCPRO}ed.  Getting
either of these wrong will lead to crashes, often in completely random
places unrelated to where the problem lies.  There are some functions
(@code{Fsignal} is the canonical example) which may or may not return.
In these cases, the function is responsible for cleaning up the
@code{GCPRO}s if it doesn't return, so you should treat it as an
ordinary function.

@item
For every @code{GCPRO@var{n}}, there have to be declarations of
@code{struct gcpro gcpro1, gcpro2, ..., gcpro@var{n}}.

@item
The way this actually works is that all currently active @code{GCPRO}s
are chained through the @code{struct gcpro} local variables, with the
variable @samp{gcprolist} pointing to the head of the list and the nth
local @code{gcpro} variable pointing to the first @code{gcpro} variable
in the next enclosing stack frame.  Each @code{GCPRO}ed thing is an
lvalue, and the @code{struct gcpro} local variable contains a pointer
to this lvalue.  This is why things will mess up badly if you don't
pair up the @code{GCPRO}s and @code{UNGCPRO}s---you will end up with
@code{gcprolist}s containing pointers to @code{struct gcpro}s or local
@code{Lisp_Object} variables in no-longer-active stack frames.

@item
It is actually possible for a single @code{struct gcpro} to protect a
contiguous array of any number of values, rather than just a single
lvalue.  To effect this, call @code{GCPRO@var{n}} as usual on the first
object in the array and then set @code{gcpro@var{n}.nvars}.

@item
@strong{Strings are relocated.}  What this means in practice is that
the pointer obtained using @code{XSTRING_DATA()} is liable to change at
any time, and you should never keep it around past any function call,
or pass it as an argument to any function that might cause a garbage
collection.  This is why a number of functions accept either a
``non-relocatable'' @code{char *} pointer or a relocatable Lisp string,
and only access the Lisp string's data at the very last minute.  In
some cases, you may end up having to @code{alloca()} some space and
copy the string's data into it.
@item
By convention, if you have to nest @code{GCPRO}'s, use
@code{NGCPRO@var{n}} (along with @code{struct gcpro ngcpro1, ngcpro2},
etc.), @code{NNGCPRO@var{n}}, etc.  This avoids compiler warnings about
shadowed locals.

@item
It is @emph{always} better to err on the side of extra @code{GCPRO}s
rather than too few.  The extra cycles spent on this are almost never
going to make a whit of difference in the speed of anything.

@item
The general rule to follow is that caller, not callee, @code{GCPRO}s.
That is, you should not have to explicitly @code{GCPRO} any Lisp
objects that are passed in as parameters.  One exception to this rule
is if you ever plan to change the parameter value, and store a new
object in it.  In that case, you @emph{must} @code{GCPRO} the
parameter, because otherwise the new object will not be protected.

So, if you create any Lisp objects (remember, this happens in all sorts
of circumstances, e.g. with @code{Fcons()}, etc.), you are responsible
for @code{GCPRO}ing them, unless you are @emph{absolutely sure} that
there's no possibility that a garbage-collection can occur while you
need to use the object.  Even then, consider @code{GCPRO}ing.

@item
If you have the @emph{least smidgeon of doubt} about whether you need
to @code{GCPRO}, you should @code{GCPRO}.

@item
Beware of @code{GCPRO}ing something that is uninitialized.  If you have
any shade of doubt about this, initialize all your variables to
@code{Qnil}.

@item
Be careful of traps, like calling @code{Fcons()} in the argument to
another function.  By the ``caller protects'' law, you should be
@code{GCPRO}ing the newly-created cons, but you aren't.  A certain
number of functions that are commonly called on freshly created stuff
(e.g. @code{nconc2()}, @code{Fsignal()}) break the ``caller protects''
law and go ahead and @code{GCPRO} their arguments so as to simplify
things, but make sure to check whether this is OK whenever doing
something like this.

@item
Once again, remember to @code{GCPRO}!  Bugs resulting from insufficient
@code{GCPRO}ing are intermittent and extremely difficult to track down,
often showing up in crashes inside of @code{garbage-collect} or in
weirdly corrupted objects or even in incorrect values in a totally
different section of code.
@end enumerate

If you don't understand whether to @code{GCPRO} in a particular
instance, ask on the mailing lists.  A general hint is that the C
implementation of @code{prog1} is the canonical example: it must
protect the value of its first form while the remaining forms are
evaluated.

@cindex garbage collection, conservative
@cindex conservative garbage collection
Given the extremely error-prone nature of the @code{GCPRO} scheme, and
the difficulty of tracking down the bugs it causes, the scheme should
be considered a deficiency in the XEmacs code.  A solution to this
problem would involve implementing so-called @dfn{conservative} garbage
collection for the C stack.  That involves looking through all of stack
memory and treating anything that looks like a reference to an object
as a reference.  This will result in a few objects not getting
collected when they should, but it obviates the need for
@code{GCPRO}ing, and allows garbage collection to happen at any point
at all, such as during object allocation.
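Returning to the current scheme, here is a schematic function that
follows the rules above.  The function itself is invented for
illustration and assumes the XEmacs source environment
(@file{lisp.h}); the @code{GCPRO2}/@code{UNGCPRO} usage is the pattern
described in this section.

@example
/* Hypothetical function, for illustration only.  It evaluates two
   forms and conses the results together, following the rules above:
   locals are initialized to Qnil before being protected, GCPRO2 has
   matching `struct gcpro' declarations, and UNGCPRO is called before
   returning.  */
static Lisp_Object
eval_and_cons (Lisp_Object form1, Lisp_Object form2)
@{
  /* FORM1 and FORM2 are protected by our caller (``caller
     protects'').  */
  Lisp_Object val1 = Qnil, val2 = Qnil;
  struct gcpro gcpro1, gcpro2;

  GCPRO2 (val1, val2);
  val1 = Feval (form1);
  /* The second Feval can trigger garbage collection; without the
     GCPRO above, the object in VAL1 might have no other reference
     and could be reclaimed.  */
  val2 = Feval (form2);
  UNGCPRO;
  return Fcons (val1, val2);
@}
@end example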
@node Garbage Collection - Step by Step, Integers and Characters, GCPROing, Allocation of Objects in XEmacs Lisp
@section Garbage Collection - Step by Step
@cindex garbage collection - step by step

@menu
* Invocation::
* garbage_collect_1::
* mark_object::
* gc_sweep::
* sweep_lcrecords_1::
* compact_string_chars::
* sweep_strings::
* sweep_bit_vectors_1::
@end menu

@node Invocation, garbage_collect_1, Garbage Collection - Step by Step, Garbage Collection - Step by Step
@subsection Invocation
@cindex garbage collection, invocation

The first thing that anyone should know about garbage collection is
when and how the garbage collector is invoked.  One might think that
this could happen every time new memory is allocated, e.g. whenever new
objects are created, but this is @emph{not} the case.  Instead, we have
the following situation:

The entry point of any process of garbage collection is an invocation
of the function @code{garbage_collect_1} in file @file{alloc.c}.  The
invocation can occur @emph{explicitly} by calling the function
@code{Fgarbage_collect} (in addition, this function provides
information about the freed memory), or can occur @emph{implicitly} in
four different situations:

@enumerate
@item
In function @code{main_1} in file @file{emacs.c}.  This function is
called at each startup of XEmacs.  The garbage collection is invoked
after all initial creations are completed, but only if a special
internal error-checking constant @code{ERROR_CHECK_GC} is defined.

@item
In function @code{disksave_object_finalization} in file
@file{alloc.c}.  The only purpose of this function is to clear from
memory those objects that need not be stored with XEmacs when we dump
out an executable.  This is done only by @code{Fdump_emacs} or
@code{Fdump_emacs_data} (both in @file{emacs.c}).  The actual clearing
is accomplished by making these objects unreachable and starting a
garbage collection.  The function is only used while building XEmacs.

@item
In function @code{Feval / eval} in file @file{eval.c}.  Each time the
well-known and often-used function @code{eval} is called to evaluate a
form, one of the first things that can happen is a call to
@code{garbage_collect_1}.  There are three relevant global variables:
@code{consing_since_gc} (which counts the cons cells created since the
last garbage collection), @code{gc_cons_threshold} (a specified
threshold after which a garbage collection occurs) and
@code{always_gc}.  If @code{always_gc} is set or if the threshold is
exceeded, the garbage collection will start.

@item
In function @code{Ffuncall / funcall} in file @file{eval.c}.  This
function performs calls of Lisp functions and behaves analogously to
@code{Feval}.
@end enumerate

The upshot is that garbage collection can basically occur wherever
@code{Feval} or @code{Ffuncall} is used, either directly or through
another function.  Since calls to these two functions are hidden in
various other functions, many calls to @code{garbage_collect_1} are not
obviously foreseeable, and therefore unexpected.  Instances where they
are used that are worth remembering are various Lisp commands, such as
@code{or}, @code{and}, @code{if}, @code{cond}, @code{while},
@code{setq}, etc., miscellaneous @code{gui_item_...} functions,
everything related to @code{eval} (@code{Feval_buffer}, @code{call0},
...) and inside @code{Fsignal}.  The latter is used to handle signals,
such as the ones raised by every @code{QUIT} macro triggered after
pressing Ctrl-g.
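As a summary of case 3, the implicit trigger amounts to a check like
the following sketch.  It is simplified: the actual logic sits at the
top of @code{Feval} in @file{eval.c} and checks further conditions, and
the helper function name here is invented.  The three variables,
however, are the ones described above.

@example
/* Simplified sketch of the implicit GC trigger in Feval (the helper
   name is invented for illustration).  */
static void
maybe_garbage_collect (void)
@{
  if (always_gc || consing_since_gc > gc_cons_threshold)
    garbage_collect_1 ();
@}
@end example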
@node garbage_collect_1, mark_object, Invocation, Garbage Collection - Step by Step
@subsection @code{garbage_collect_1}
@cindex @code{garbage_collect_1}

We can now describe exactly what happens after the invocation takes
place.

@enumerate
@item
There are several cases in which the garbage collector is left
immediately: when we are already garbage collecting
(@code{gc_in_progress}), when garbage collection is somehow forbidden
(@code{gc_currently_forbidden}), when we are currently displaying
something (@code{in_display}), or when we are preparing for the
armageddon of the whole system (@code{preparing_for_armageddon}).

@item
Next, the correct frame in which to put all the output occurring during
garbage collection is determined.  In order to be able to restore the
old display's state after displaying the message, some data about the
current cursor position has to be saved.  The variables
@code{pre_gc_cursor} and @code{cursor_changed} take care of that.

@item
The state of @code{gc_currently_forbidden} must be restored after the
garbage collection, no matter what happens during the process.  We
accomplish this by @code{record_unwind_protect}ing the suitable
function @code{restore_gc_inhibit} together with the current value of
@code{gc_currently_forbidden}.

@item
If we are running an interactive XEmacs session, the next step is
simply to show the garbage collector's cursor/message.

@item
The following steps are the intrinsic steps of the garbage collector,
so @code{gc_in_progress} is set.

@item
For debugging purposes, it is possible to copy the current C stack
frame.  However, this seems to be a currently unused feature.

@item
Before actually starting to go over all live objects, references to
objects that are no longer used are pruned.  We only have to do this
for events (@code{clear_event_resource}) and for specifiers
(@code{cleanup_specifiers}).

@item
Now the mark phase begins and marks all accessible elements.  In order
to start from all slots that serve as roots of accessibility, the
function @code{mark_object} is called for each root individually to go
out from there and mark all reachable objects.  All roots that are
traversed are shown in their processed order:

@itemize @bullet
@item
all constant symbols and static variables that are registered via
@code{staticpro} in the dynarr @code{staticpros}.
@xref{Adding Global Lisp Variables}.

@item
all Lisp objects that are created in C functions and that must be
protected from being freed.  They are registered in the global list
@code{gcprolist}.
@xref{GCPROing}.

@item
all local variables (i.e. their name fields @code{symbol} and old
values @code{old_values}) that are bound during evaluation by the Lisp
engine.  They are stored in @code{specbinding} structs pushed on a
stack called @code{specpdl}.
@xref{Dynamic Binding; The specbinding Stack; Unwind-Protects}.

@item
each catch block that the Lisp engine encounters during evaluation
causes the creation of a struct @code{catchtag} inserted in the list
@code{catchlist}.  Their tag (@code{tag}) and value (@code{val}) fields
are freshly created objects and therefore have to be marked.
@xref{Catch and Throw}.

@item
every function application pushes new structs @code{backtrace} on the
call stack of the Lisp engine (@code{backtrace_list}).  The unique
parts that have to be marked are the fields for each function
(@code{function}) and all their arguments (@code{args}).
@xref{Evaluation}.
@item
all objects that are used by the redisplay engine and must not be
freed are marked by a special function called @code{mark_redisplay}
(in @file{redisplay.c}).
@item
all objects created for profiling purposes are allocated by C
functions instead of using the Lisp allocation mechanisms.  In order
to receive the right ones during the sweep phase, they also have to be
marked manually.  That is done by the function
@code{mark_profiling_info}.
@end itemize
@item
Hash tables in XEmacs are a kind of special object making use of a
concept often called ``weak pointers''.  To make a long story short,
such pointers are not followed when determining the set of live
objects during garbage collection.  Any object referenced only by weak
pointers is collected anyway, and the reference to it is cleared.
Hash tables exhibit different usage patterns of weak pointers, giving
rise to different types of hash tables, namely ``non-weak'',
``weak'', ``key-weak'' and ``value-weak'' (internally also
``key-car-weak'' and ``value-car-weak'') hash tables, each clearing
entries depending on different conditions.  More information can be
found in the documentation of the function @code{make-hash-table}.

Because there are complicated dependency rules about when and what to
mark while processing weak hash tables, the standard @code{marker}
method is only active when marking non-weak hash tables.  As soon as a
weak component is in the table, the hash table entries are ignored
while marking.  Instead, they are marked separately by the function
@code{finish_marking_weak_hash_tables}.  This function iterates over
the hash table entries @code{hentries} for each weak hash table in
@code{Vall_weak_hash_tables}.  Depending on the type of a table, the
appropriate action is performed.
If a table is acting as @code{HASH_TABLE_KEY_WEAK}, and a key is
already marked, everything reachable from the @code{value} component
is marked.  If it is acting as a @code{HASH_TABLE_VALUE_WEAK} and the
value component is already marked, the marking starts only from the
@code{key} component.
If it is a @code{HASH_TABLE_KEY_CAR_WEAK} and the car of the key entry
is already marked, we mark both the @code{key} and @code{value}
components.
Finally, if the table is of the type @code{HASH_TABLE_VALUE_CAR_WEAK}
and the car of the value component is already marked, again both the
@code{key} and the @code{value} components get marked.

There are also lists with comparable properties, called weak lists.
They come in the variants @code{simple}, @code{assoc},
@code{key-assoc} and @code{value-assoc}.  You can find further details
about them in the description of the function @code{make-weak-list}.
The scheme of their marking is similar: all weak lists are listed in
@code{Vall_weak_lists}, so we iterate over them.  The marking is
advanced until we hit an already marked pair.  Then we know that the
rest was already marked completely during an earlier pass.  Again, the
action depends on the particular type of weak list.  If it is a
@code{WEAK_LIST_SIMPLE} and the element is marked, we mark the
@code{cons} part.  If it is a @code{WEAK_LIST_ASSOC} and the element
is either not a pair or a pair whose car and cdr are both marked, we
mark the @code{cons} and the @code{elem}.  If it is a
@code{WEAK_LIST_KEY_ASSOC} and the element is either not a pair or a
pair whose car is marked, we mark the @code{cons} and the @code{elem}.
Finally, if it is a @code{WEAK_LIST_VALUE_ASSOC} and the element is
either not a pair or a pair whose cdr is marked, we mark both the
@code{cons} and the @code{elem}.

Since marking objects reachable from weak hash tables and weak lists
may mark further objects, which in turn may make additional weak
entries markable, both finishing functions are re-run as long as the
previous pass freshly marked objects that were unmarked before.
@item
After completing the special marking for the weak hash tables and for
the weak lists, all entries that point to objects that are going to be
swept later are useless, and therefore have to be removed from the
table or the list.

The function @code{prune_weak_hash_tables} does the job for weak hash
tables.  Totally unmarked hash tables are removed from the list
@code{Vall_weak_hash_tables}.  The others are treated more carefully,
by scanning over all entries and removing an entry as soon as one of
its components @code{key} and @code{value} is unmarked.

The same idea applies to the weak lists.  It is accomplished by
@code{prune_weak_lists}: an unmarked list is pruned from
@code{Vall_weak_lists} immediately.  A marked list is treated more
carefully, by going over it and removing just the unmarked pairs.
@item
The function @code{prune_specifiers} checks all specifiers held in
@code{Vall_specifiers} and removes the unmarked ones from the list.
@item
All syntax tables are stored in a list called
@code{Vall_syntax_tables}.  The function @code{prune_syntax_tables}
walks through it and unlinks the tables that are unmarked.
@item
Next comes the actual sweeping, performed by the function
@code{gc_sweep}, which does most of the remaining work.
@item
First, all the bookkeeping variables of the garbage collector are
reset: @code{consing_since_gc}, the counter of the cells created since
the last garbage collection, is set back to 0, and
@code{gc_in_progress} is cleared.
@item
In case the session is interactive, the displayed cursor and message
are removed again.
@item
The state of @code{gc_inhibit} is restored to its former value by
unwinding the stack.
@item
A small memory reserve is always held back, reachable through
@code{breathing_space}.  If nothing of it is left, we create a new
reserve and exit.
@end enumerate

@node mark_object, gc_sweep, garbage_collect_1, Garbage Collection - Step by Step
@subsection @code{mark_object}
@cindex @code{mark_object}

The first thing that is checked while marking an object is whether the
object is a real Lisp object (@code{Lisp_Type_Record}) or just an
integer or a character.  Integers and characters are the only two
types that are stored directly, without another level of indirection,
and therefore they don't have to be marked and collected.  @xref{How
Lisp Objects Are Represented in C}.

The other case, a pointer to a Lisp object, is the one we have to
handle.  There are, however, three conditions under which nothing
needs to be done while marking: the object is read-only, which
prevents it from being garbage collected and hence from being marked
(@code{C_READONLY_RECORD_HEADER}); the object is already marked and
need not be marked a second time (checked by
@code{MARKED_RECORD_HEADER_P}); or it is a special, unmarkable object
(@code{UNMARKABLE_RECORD_HEADER_P}; apparently these are objects that
sit in some const space and can therefore not be marked, see
@code{this_one_is_unmarkable} in @file{alloc.c}).  Otherwise, the
actual marking proceeds.
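Putting the pieces together, the control flow of @code{mark_object}
can be sketched roughly as follows (simplified pseudo-code in the
style of @file{alloc.c}, not the literal implementation; it also
anticipates the marker-method loop described next):

@example
void
mark_object (Lisp_Object obj)
@{
 tail_recurse:
  if (XTYPE (obj) != Lisp_Type_Record)
    return;   /* integers and characters need no marking */
  @{
    struct lrecord_header *lheader = XRECORD_LHEADER (obj);
    if (C_READONLY_RECORD_HEADER_P (lheader)
        || MARKED_RECORD_HEADER_P (lheader)
        || UNMARKABLE_RECORD_HEADER_P (lheader))
      return;   /* the three do-nothing cases above */
    MARK_RECORD_HEADER (lheader);
    if (LHEADER_IMPLEMENTATION (lheader)->marker)
      @{
        /* The marker method marks sub-objects itself and returns at
           most one further object to continue from, which avoids
           deep recursion on long chains.  */
        obj = LHEADER_IMPLEMENTATION (lheader)->marker (obj);
        if (!NILP (obj))
          goto tail_recurse;
      @}
  @}
@}
@end example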
The marking itself is done by using the macro
@code{MARK_RECORD_HEADER} to mark the object (actually the special
flag in its lrecord header) and by calling its special marker
``method'' @code{marker} if available.  The marker method marks every
other object that is reachable from the current object.  Note that
marker methods should not call @code{mark_object} recursively, but
should instead return the next object from which further marking has
to be performed.  If such an object is returned, the whole
@code{mark_object} process is repeated, beginning with that object.

@node gc_sweep, sweep_lcrecords_1, mark_object, Garbage Collection - Step by Step
@subsection @code{gc_sweep}
@cindex @code{gc_sweep}

The job of this function is to free all unmarked records from memory.
As we know, there are different types of objects implemented and
managed, and consequently different ways to free them from memory.
@xref{Introduction to Allocation}.

We start with all objects stored through @code{lcrecords}.  All
bulkier objects are allocated and handled using this @code{lcrecord}
scheme: each object is @code{malloc}ed separately instead of being
placed in one of the contiguous frob blocks.  The types currently
stored using @code{lcrecords}'s @code{alloc_lcrecord} and
@code{make_lcrecord_list} are: vectors, buffers, char-table,
char-table-entry, console, weak-list, database, device, ldap,
hash-table, command-builder, extent-auxiliary, extent-info, face,
coding-system, frame, image-instance, glyph, popup-data, gui-item,
keymap, charset, color_instance, font_instance, opaque, opaque-list,
process, range-table, specifier, symbol-value-buffer-local,
symbol-value-lisp-magic, symbol-value-varalias, toolbar-button,
tooltalk-message, tooltalk-pattern, window, and window-configuration.
We take care of them in the first place in order to be able to handle
and finalize items stored in them more easily.  The function
@code{sweep_lcrecords_1}, described below, does the whole job for us.
For a description of the internals: @xref{lrecords}.

Our next candidates are objects that behave quite differently from
everything else: the strings.  They consist of two parts: a fixed-size
portion (@code{struct Lisp_String}), holding the string's length, its
property list and a pointer to the second part; and the actual string
data, which is stored in string-chars blocks, comparable to frob
blocks.  In these blocks the data is not only freed; the holes are
also compressed, i.e. all remaining strings are relocated next to each
other.  @xref{String}.  This compacting phase is performed by the
function @code{compact_string_chars}; the actual sweeping is done by
the function @code{sweep_strings}.  Both are described below.

After that, the other types are swept step by step, using the
functions @code{sweep_conses}, @code{sweep_bit_vectors_1},
@code{sweep_compiled_functions}, @code{sweep_floats},
@code{sweep_symbols}, @code{sweep_extents}, @code{sweep_markers} and
@code{sweep_events}.  These are the fixed-size types cons, float,
compiled-function, symbol, marker, extent, and event, stored in
so-called ``frob blocks''; we can therefore basically do the same for
every type, using the same macros, defined especially to handle
everything with respect to fixed-size blocks.  The only fixed-size
type that is not handled here is the fixed-size portion of strings,
because we took special care of it earlier.
The only big exception is bit vectors, which are stored differently
and are therefore also treated differently, by the function
@code{sweep_bit_vectors_1} described later.

First, we need some brief information about how these fixed-size types
are managed in general, in order to understand how the sweeping is
done.  They all have a fixed size, and are therefore stored in big,
at-once-allocated blocks of memory, each of which can hold a certain
number of objects of one type.  The macro
@code{DECLARE_FIXED_TYPE_ALLOC} creates the suitable structures for
every type.  More precisely, we have the block struct (holding a
pointer to the previous block @code{prev} and the objects in
@code{block[]}), a pointer to the current block
(@code{current_..._block}) and its last index
(@code{current_..._block_index}), and a pointer to the free list that
will be created.  There is also a macro
@code{FIXED_TYPE_FROM_BLOCK}, plus some related macros, used to obtain
a new object: either from the free list
(@code{ALLOCATE_FIXED_TYPE_1}), if an unused object of that type is
available, or by allocating a completely new block
(@code{ALLOCATE_FIXED_TYPE_FROM_BLOCK}).

The rest works as follows: all of these types define a macro
@code{UNMARK_...} that is used to unmark an object.  They define a
macro @code{ADDITIONAL_FREE_...} that specifies additional work that
has to be done when converting an object from in use to not in use
(so far, only markers use it, in order to unchain them).  Then they
all call the macro @code{SWEEP_FIXED_TYPE_BLOCK}, instantiated with
their type name and their struct name.

This call in particular does the following: we go over all blocks,
starting with the current one and moving towards the oldest.  For each
block, we look at every object in it.  If the object is already freed
(checked with @code{FREE_STRUCT_P} using the first pointer of the
object), or if it is set to read-only
(@code{C_READONLY_RECORD_HEADER_P}), nothing needs to be done.  If it
is unmarked (checked with @code{MARKED_RECORD_HEADER_P}), it is put in
the free list and set free (using the macro @code{FREE_FIXED_TYPE});
otherwise it stays in the block, but is unmarked (by
@code{UNMARK_...}).  While going through one block, we note whether
the whole block is empty.  If so, the whole block is freed (using
@code{xfree}) and the free list state is set to the state it had
before handling this block.

@node sweep_lcrecords_1, compact_string_chars, gc_sweep, Garbage Collection - Step by Step
@subsection @code{sweep_lcrecords_1}
@cindex @code{sweep_lcrecords_1}

After zeroing the complete lcrecord statistics, we go over all
lcrecords twice.  They are all chained together in a list with a head
called @code{all_lcrecords}.

The first loop calls each object's @code{finalizer} method, but only
if the object is not read-only (@code{C_READONLY_RECORD_HEADER_P}), is
not already marked (@code{MARKED_RECORD_HEADER_P}), is not already in
a free list (list of freed objects, field @code{free}) and actually
has a finalizer method.

The second loop iterates through the whole list again and actually
frees the appropriate objects: if an object is read-only or marked it
has to persist; otherwise it is manually freed by calling
@code{xfree}.
During this loop, the lcrecord statistics are kept up to date by
calling @code{tick_lcrecord_stats} with the right arguments.

@node compact_string_chars, sweep_strings, sweep_lcrecords_1, Garbage Collection - Step by Step
@subsection @code{compact_string_chars}
@cindex @code{compact_string_chars}

The purpose of this function is to compact all the data parts of the
strings that are held in so-called @code{string_chars_block}s,
i.e. the strings that do not exceed a certain maximal length.

This is done as follows: we keep two positions in the
@code{string_chars_block}s, using two pointer/integer pairs, namely
@code{from_sb}/@code{from_pos} and @code{to_sb}/@code{to_pos}.  They
denote the source and destination positions for copying the string
currently being handled.  While going over all chained
@code{string_chars_block}s and the strings they hold, starting at
@code{first_string_chars_block}, both pointers are advanced and
eventually a string is copied from @code{from_sb} to @code{to_sb},
depending on the status of the pointed-to strings.

More precisely, we can distinguish between the following actions.
@itemize @bullet
@item
The string at @code{from_sb}'s position could be marked as free, which
is indicated by an invalid pointer in the place where the pointer back
to the fixed-size string object should be, and which is checked by
@code{FREE_STRUCT_P}.  In this case, the
@code{from_sb}/@code{from_pos} pair is advanced to the next string,
and nothing has to be copied.
@item
Also, if a string object itself is unmarked, nothing has to be
copied.  We likewise advance the @code{from_sb}/@code{from_pos} pair
as described above.
@item
In all other cases, we have a marked string at hand.  The string data
must be moved from the from-position to the to-position.  In case
there is not enough space in the current @code{to_sb} block, we
advance this pointer to the beginning of the next block before
copying.  If the from and to positions are different, we perform the
actual copying using the library function @code{memmove}.
@end itemize

After compacting, the pointer to the current
@code{string_chars_block}, sitting in
@code{current_string_chars_block}, is reset to the last block to which
we moved a string, i.e. @code{to_block}, and all remaining blocks (we
know that they just carry garbage) are explicitly @code{xfree}d.

@node sweep_strings, sweep_bit_vectors_1, compact_string_chars, Garbage Collection - Step by Step
@subsection @code{sweep_strings}
@cindex @code{sweep_strings}

The sweeping of the fixed-size string objects is essentially the same
as for all other fixed-size types.  As before, the freeing into the
suitable free list is done by using the macro
@code{SWEEP_FIXED_TYPE_BLOCK} after defining the right macros
@code{UNMARK_string} and @code{ADDITIONAL_FREE_string}.  These two
definitions are a little bit special compared to the ones used for the
other fixed-size types.

@code{UNMARK_string} is defined the same way, except for some
additional code used for updating the bookkeeping information.

For strings, @code{ADDITIONAL_FREE_string} has extra work to do: if
the string was not allocated in a @code{string_chars_block} because it
exceeded the maximal length and was therefore @code{malloc}ed
separately, we must also @code{xfree} it explicitly.
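The underlying technique is a classic two-finger compaction pass.  The
following self-contained toy program (purely illustrative; it is not
XEmacs code and uses made-up record headers) shows the same idea on a
flat byte array of variable-length records:

@example
#include <stdio.h>
#include <string.h>

/* Toy heap: each record is a one-byte tag ('L' = live, 'F' = free)
   and a one-byte length, followed by that many payload bytes.
   compact() slides live records towards the front, just as
   compact_string_chars slides live string data within the
   string-chars blocks, using memmove for the actual copying.  */
static size_t
compact (unsigned char *heap, size_t used)
@{
  size_t from = 0, to = 0;
  while (from < used)
    @{
      size_t len = 2 + heap[from + 1];   /* header + payload */
      if (heap[from] == 'L')
        @{
          if (to != from)
            memmove (heap + to, heap + from, len);
          to += len;
        @}
      from += len;                       /* skip record, live or free */
    @}
  return to;                             /* new end of live data */
@}

int
main (void)
@{
  unsigned char heap[] = "L\003abcF\002xxL\002hi";
  printf ("%d bytes live\n", (int) compact (heap, sizeof (heap) - 1));
  return 0;
@}
@end example

It prints @samp{9 bytes live}: the free record in the middle has been
squeezed out and the last record relocated, which is exactly what
happens to string data, except that XEmacs additionally has to update
the back pointers in the @code{struct Lisp_String} objects.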
@node sweep_bit_vectors_1, , sweep_strings, Garbage Collection - Step by Step
@subsection @code{sweep_bit_vectors_1}
@cindex @code{sweep_bit_vectors_1}

Bit vectors are also one of the rare types that are @code{malloc}ed
individually.  Consequently, while sweeping, all bit vectors that are
no longer needed must be freed by hand.  This is done, as one might
imagine, in the expected way: since they are all registered in a list
called @code{all_bit_vectors}, all elements of that list are
traversed, unmarked bit vectors are unlinked and freed by calling
@code{xfree}, and the remaining ones are unmarked.  In addition, the
bookkeeping information used for the garbage collector's output
purposes is updated.

@node Integers and Characters, Allocation from Frob Blocks, Garbage Collection - Step by Step, Allocation of Objects in XEmacs Lisp
@section Integers and Characters
@cindex integers and characters
@cindex characters, integers and

Integer and character Lisp objects are created from integers using the
macros @code{XSETINT()} and @code{XSETCHAR()} or the equivalent
functions @code{make_int()} and @code{make_char()}.  (These are
actually macros on most systems.)  These functions basically just do
some moving of bits around, since the integral value of the object is
stored directly in the @code{Lisp_Object}.

@code{XSETINT()} and the like will truncate values given to them that
are too big; i.e. you won't get the value you expected, but the tag
bits will at least be correct.

@node Allocation from Frob Blocks, lrecords, Integers and Characters, Allocation of Objects in XEmacs Lisp
@section Allocation from Frob Blocks
@cindex allocation from frob blocks
@cindex frob blocks, allocation from

The uninitialized memory required by a @code{Lisp_Object} of a
particular type is allocated using @code{ALLOCATE_FIXED_TYPE()}.  This
only occurs inside of the lowest-level object-creating functions in
@file{alloc.c}: @code{Fcons()}, @code{make_float()},
@code{Fmake_byte_code()}, @code{Fmake_symbol()},
@code{allocate_extent()}, @code{allocate_event()},
@code{Fmake_marker()}, and @code{make_uninit_string()}.

The idea is that, for each type, there are a number of frob blocks
(each 2K in size); each frob block is divided up into object-sized
chunks.  Each frob block will have some of these chunks that are
currently assigned to objects, and perhaps some that are free.  (If a
frob block has nothing but free chunks, it is freed at the end of the
garbage collection cycle.)  The free chunks are stored in a free list,
which is chained by storing a pointer in the first four bytes of the
chunk.  (Except for the free chunks at the end of the last frob block,
which are handled using an index that points past the end of the
last-allocated chunk in the last frob block.)
@code{ALLOCATE_FIXED_TYPE()} first tries to retrieve a chunk from the
free list; if that fails, it calls
@code{ALLOCATE_FIXED_TYPE_FROM_BLOCK()}, which looks at the end of the
last frob block for space, and creates a new frob block if there is
none.  (There are actually two versions of these macros, one of which
is more defensive but less efficient and is used for error-checking.)

@node lrecords, Low-level allocation, Allocation from Frob Blocks, Allocation of Objects in XEmacs Lisp
@section lrecords
@cindex lrecords

[see @file{lrecord.h}]

All lrecords have at the beginning of their structure a @code{struct
lrecord_header}.  This just contains a type number and some flags,
including the mark bit.
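For concreteness, the header looks roughly like the following (a
schematic rendering; the authoritative declaration is in
@file{lrecord.h} and has changed over time):

@example
struct lrecord_header
@{
  unsigned int type :8;           /* index into
                                     lrecord_implementations_table */
  unsigned int mark :1;           /* the GC mark bit */
  unsigned int c_readonly :1;     /* object sits in C const space */
  unsigned int lisp_readonly :1;  /* object not modifiable from Lisp */
@};
@end example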
All builtin type numbers are defined as constants in @code{enum
lrecord_type}, to allow the compiler to generate more efficient code
for @code{@var{type}P}.  The type number, through the
@code{lrecord_implementations_table}, gives access to a @code{struct
lrecord_implementation}, which is a structure containing method
pointers and such.  There is one of these for each type, and it is a
global, constant, statically-declared structure that is defined in the
@code{DEFINE_LRECORD_IMPLEMENTATION()} macro.

Simple lrecords (of type (b) above) just have a @code{struct
lrecord_header} at their beginning.  lcrecords, however, actually have
a @code{struct lcrecord_header}.  This, in turn, has a @code{struct
lrecord_header} at its beginning, so sanity is preserved; but it also
has a pointer used to chain all lcrecords together, and a special ID
field used to distinguish one lcrecord from another.  (This field is
used only for debugging and could be removed, but the space gain is
not significant.)

Simple lrecords are created using @code{ALLOCATE_FIXED_TYPE()}, just
like for other frob blocks.  The only change is that the
implementation pointer must be initialized correctly.  (The
implementation structure for an lrecord, or rather the pointer to it,
is named @code{lrecord_float}, @code{lrecord_extent},
@code{lrecord_buffer}, etc.)

lcrecords are created using @code{alloc_lcrecord()}.  This takes a
size to allocate and an implementation pointer.  (The size needs to be
passed because some lcrecords, such as window configurations, are of
variable size.)  This basically just @code{malloc()}s the storage,
initializes the @code{struct lcrecord_header}, and chains the lcrecord
onto the head of the list of all lcrecords, which is stored in the
variable @code{all_lcrecords}.  The calls to @code{alloc_lcrecord()}
generally occur in the lowest-level allocation function for each
lrecord type.

Whenever you define a new lrecord type, you need to call either
@code{DEFINE_LRECORD_IMPLEMENTATION()} or
@code{DEFINE_LRECORD_SEQUENCE_IMPLEMENTATION()}.  This needs to be
specified in a @file{.c} file, at the top level.  What this actually
does is define and initialize the implementation structure for the
lrecord.  (And possibly declares a function @code{error_check_foo()}
that implements the @code{XFOO()} macro when error-checking is
enabled.)  The arguments to the macros are the actual type name (this
is used to construct the C variable name of the lrecord
implementation structure and related structures using the @samp{##}
macro concatenation operator), a string that names the type on the
Lisp level (this may not be the same as the C type name; typically,
the C type name has underscores, while the Lisp string has dashes),
various method pointers, and the name of the C structure that contains
the object.  The methods are used to encapsulate type-specific
information about the object, such as how to print it or mark it for
garbage collection, so that it's easy to add new object types without
having to add a specific case for each new type in a bunch of
different places.

The difference between @code{DEFINE_LRECORD_IMPLEMENTATION()} and
@code{DEFINE_LRECORD_SEQUENCE_IMPLEMENTATION()} is that the former is
used for fixed-size object types and the latter is for variable-size
object types.  Most object types are fixed-size; some complex types,
however (e.g. window configurations), are variable-size.
Variable-size object types have an extra method, which is called to
determine the actual size of a particular object of that type.
(Currently this is only used for keeping allocation statistics.)

For the purpose of keeping allocation statistics, the allocation
engine keeps a list of all the different types that exist.  Note that,
since @code{DEFINE_LRECORD_IMPLEMENTATION()} is a macro that is
specified at top level, there is no way for it to initialize the
global data structures containing type information, like
@code{lrecord_implementations_table}.  For this reason a call to
@code{INIT_LRECORD_IMPLEMENTATION} must be added to the same source
file that contains @code{DEFINE_LRECORD_IMPLEMENTATION}, but not at
the top level: put it in one of the init functions, typically
@code{syms_of_@var{foo}()}.  @code{INIT_LRECORD_IMPLEMENTATION} must
be called before an object of this type is used.

The type number is also used to index into an array holding the number
of objects of each type and the total memory allocated for objects of
that type.  The statistics in this array are computed during the sweep
stage.  These statistics are returned by the call to
@code{garbage-collect}.

Note that for every type defined with a @code{DEFINE_LRECORD_*()}
macro, there needs to be a @code{DECLARE_LRECORD_IMPLEMENTATION()}
somewhere in a @file{.h} file, and this @file{.h} file needs to be
included by @file{inline.c}.

Furthermore, there should generally be a set of @code{XFOOBAR()},
@code{FOOBARP()}, etc. macros in a @file{.h} (or occasionally
@file{.c}) file.  To create one of these, copy an existing model and
modify as necessary.

@strong{Please note:} If you define an lrecord in an external
dynamically-loaded module, you must use
@code{DECLARE_EXTERNAL_LRECORD},
@code{DEFINE_EXTERNAL_LRECORD_IMPLEMENTATION}, and
@code{DEFINE_EXTERNAL_LRECORD_SEQUENCE_IMPLEMENTATION} instead of the
non-EXTERNAL forms.  These macros will dynamically add new type
numbers to the global enum that records them, whereas the non-EXTERNAL
forms assume that the programmer has already inserted the correct type
numbers into the enum at compile time.

The various methods in the lrecord implementation structure are:

@enumerate
@item
@cindex mark method
A @dfn{mark} method.  This is called during the marking stage and
passed a function pointer (usually the @code{mark_object()} function),
which is used to mark an object.  All Lisp objects that are contained
within the object need to be marked by applying this function to
them.  The mark method should also return a Lisp object, which should
be either @code{nil} or an object to mark.  (This can be used in lieu
of calling @code{mark_object()} on the object, to reduce the recursion
depth, and consequently should be the most heavily nested sub-object,
such as a long list.)

@strong{Please note:} When the mark method is called, garbage
collection is in progress, and special precautions need to be taken
when accessing objects; see section (B) above.

If your mark method does not need to do anything, it can be
@code{NULL}.

@item
A @dfn{print} method.  This is called to create a printed
representation of the object, whenever @code{princ}, @code{prin1}, or
the like is called.  It is passed the object, a stream to which the
output is to be directed, and an @code{escapeflag} which indicates
whether the object's printed representation should be @dfn{escaped} so
that it is readable.  (This corresponds to the difference between
@code{princ} and @code{prin1}.)
Basically, @dfn{escaped} means that strings will have quotes around
them and confusing characters in the strings, such as quotes,
backslashes, and newlines, will be backslashed; and that special care
will be taken to make symbols print in a readable fashion
(e.g. symbols that look like numbers will be backslashed).

Other readable objects should perhaps pass @code{escapeflag} on when
sub-objects are printed, so that readability is preserved when
necessary (or if not, always pass in a 1 for @code{escapeflag}).
Non-readable objects should in general ignore @code{escapeflag},
except that some use it as an indication that more verbose output
should be given.

Sub-objects are printed using @code{print_internal()}, which takes
exactly the same arguments as are passed to the print method.

Literal C strings should be printed using @code{write_c_string()}, or
@code{write_string_1()} for non-null-terminated strings.

Print methods for objects that do not have a readable representation
should check the @code{print_readably} flag and signal an error if it
is set.

If you specify @code{NULL} for the print method, the
@code{default_object_printer()} will be used.

@item
A @dfn{finalize} method.  This is called at the beginning of the sweep
stage on lcrecords that are about to be freed, and should be used to
perform any extra object cleanup.  This typically involves freeing any
extra @code{malloc()}ed memory associated with the object, releasing
any operating-system and window-system resources associated with the
object (e.g. pixmaps, fonts), etc.

The finalize method can be @code{NULL} if nothing needs to be done.

WARNING #1: The finalize method is also called at the end of the dump
phase; this time with the @code{for_disksave} parameter set to
non-zero.  The object is @emph{not} about to disappear, so you have to
make sure to @emph{not} free any extra @code{malloc()}ed memory if
you're going to need it later.  (Also, signal an error if there are
any operating-system and window-system resources here, because they
can't be dumped.)

Finalize methods should, as a rule, set to zero any pointers after
they've been freed, and check to make sure pointers are not zero
before freeing.  Although I'm pretty sure that finalize methods are
not called twice on the same object (except for the
@code{for_disksave} proviso), we've gotten nastily burned in some
cases by not doing this.

WARNING #2: The finalize method is @emph{only} called for lcrecords,
@emph{not} for simple lrecords.  If you need a finalize method for
simple lrecords, you have to stick it in the
@code{ADDITIONAL_FREE_foo()} macro in @file{alloc.c}.

WARNING #3: Things are in an @emph{extremely} bizarre state when
@code{ADDITIONAL_FREE_foo()} is called, so you have to be incredibly
careful when writing one of these functions.  See the comment in
@code{gc_sweep()}.  If you ever have to add one of these, consider
using an lcrecord or dealing with the problem in a different fashion.

@item
An @dfn{equal} method.  This compares the two objects for similarity,
when @code{equal} is called.  It should compare the contents of the
objects in some reasonable fashion.  It is passed the two objects and
a @dfn{depth} value, which is used to catch circular objects.  To
compare sub-Lisp-objects, call @code{internal_equal()} and bump the
depth value by one.  If this value gets too high, a
@code{circular-object} error will be signaled.

If this is @code{NULL}, objects are @code{equal} only when they are
@code{eq}, i.e. identical.

@item
A @dfn{hash} method.  This is used to hash objects when they are to be
compared with @code{equal}.
The rule here is that if two objects are @code{equal}, they
@emph{must} hash to the same value; i.e. your hash function should use
some subset of the sub-fields of the object that are compared in the
``equal'' method.  If you specify this method as @code{NULL}, the
object's pointer will be used as the hash, which will @emph{fail} if
the object has an @code{equal} method, so don't do this.

To hash a sub-Lisp-object, call @code{internal_hash()}.  Bump the
depth by one, just like in the ``equal'' method.

To convert a Lisp object directly into a hash value (using its
pointer), use @code{LISP_HASH()}.  This is what happens when the hash
method is @code{NULL}.

To hash two or more values together into a single value, use
@code{HASH2()}, @code{HASH3()}, @code{HASH4()}, etc.

@item
@dfn{getprop}, @dfn{putprop}, @dfn{remprop}, and @dfn{plist} methods.
These are used for object types that have properties.  I don't feel
like documenting them here.  If you create one of these objects, you
have to use different macros to define them,
i.e. @code{DEFINE_LRECORD_IMPLEMENTATION_WITH_PROPS()} or
@code{DEFINE_LRECORD_SEQUENCE_IMPLEMENTATION_WITH_PROPS()}.

@item
A @dfn{size_in_bytes} method, when the object is of variable size
(i.e. declared with a @code{_SEQUENCE_IMPLEMENTATION} macro).  This
should simply return the object's size in bytes, exactly as you might
expect.  For an example, see the methods for window configurations and
opaques.
@end enumerate

@node Low-level allocation, Cons, lrecords, Allocation of Objects in XEmacs Lisp
@section Low-level allocation
@cindex low-level allocation
@cindex allocation, low-level

Memory that you want to allocate directly should be allocated using
@code{xmalloc()} rather than @code{malloc()}.  This implements
error-checking on the return value, and once upon a time did some more
vital stuff (i.e. @code{BLOCK_INPUT}, which is no longer necessary).
Free using @code{xfree()}, and realloc using @code{xrealloc()}.  Note
that @code{xmalloc()} will do a non-local exit if the memory can't be
allocated.  (Many functions, however, do not expect this, and thus
XEmacs will likely crash if this happens.  @strong{This is a bug.}  If
you can, you should strive to make your function handle this OK.
However, it's difficult in the general case, perhaps requiring extra
unwind-protects and such.)

Note that XEmacs provides two separate replacements for the standard
@code{malloc()} library function.  These are called @dfn{old GNU
malloc} (@file{malloc.c}) and @dfn{new GNU malloc} (@file{gmalloc.c}),
respectively.  New GNU malloc is better than old GNU malloc in pretty
much every way, and should be used if possible.  (It used to be that
on some systems, the old one worked but the new one didn't.  I think
this was due specifically to a bug in SunOS, which the new one now
works around; so I don't think the old one ever has to be used any
more.)

The primary difference between both of these mallocs and the standard
system malloc is that they are much faster, at the expense of
increased space.  The basic idea is that memory is allocated in fixed
chunks whose sizes are powers of two.  This allows for basically
constant malloc time, since the various chunks can just be kept on a
number of free lists.  (The standard system malloc typically allocates
arbitrary-sized chunks and has to spend some time, sometimes a
significant amount of time, walking the heap looking for a free block
to use and cleaning things up.)
The new GNU malloc improves on things by allocating large objects in
chunks of 4096 bytes rather than in ever larger powers of two, which
would result in ever larger wastage.  There is a slight speed loss
here, but it's of doubtful significance.

NOTE: Apparently there is a third-generation GNU malloc that is
significantly better than the new GNU malloc, and should probably be
included in XEmacs.

There is also the relocating allocator, @file{ralloc.c}.  This
actually moves blocks of memory around so that the @code{sbrk()}
pointer can be shrunk and virtual memory released back to the system.
On some systems, this is a big win.  On all systems, it causes a
noticeable (and sometimes huge) speed penalty, so I turn it off by
default.  @file{ralloc.c} only works with the new GNU malloc in
@file{gmalloc.c}.  There are also two versions of @file{ralloc.c}, one
of which uses @code{mmap()} rather than block copies to move data
around.  This purports to be faster, although that depends on the
amount of data that would have had to be block copied and the
system-call overhead for @code{mmap()}.  I don't know exactly how this
works, except that the relocating-allocation routines are pretty much
used only for the memory allocated for a buffer, which is the biggest
consumer of space, esp. of space that may get freed later.

Note that the GNU mallocs have some ``memory warning'' facilities.
XEmacs taps into them and issues a warning through the standard
warning system when memory gets to 75%, 85%, and 95% full.  (On some
systems, the memory warnings are not functional.)

Allocated memory that is going to be used to make a Lisp object is
created using @code{allocate_lisp_storage()}.  This just calls
@code{xmalloc()}.  Before the current Lisp object representation was
introduced, it used to verify that the pointer to the memory could fit
into a Lisp word.  @code{allocate_lisp_storage()} is called by
@code{alloc_lcrecord()}, @code{ALLOCATE_FIXED_TYPE()}, and the vector
and bit-vector creation routines.  These routines also call
@code{INCREMENT_CONS_COUNTER()} at the appropriate times; this keeps
statistics on how much memory is allocated, so that garbage collection
can be invoked when the threshold is reached.

@node Cons, Vector, Low-level allocation, Allocation of Objects in XEmacs Lisp
@section Cons
@cindex cons

Conses are allocated in standard frob blocks.  The only thing to note
is that conses can be explicitly freed using @code{free_cons()} and
the associated functions @code{free_list()} and @code{free_alist()}.
This immediately puts the conses onto the cons free list, and
decrements the statistics on memory allocation appropriately.  This is
used to good effect by some extremely commonly-used code, to avoid
generating extra objects and thereby triggering GC sooner.  However,
you have to be @emph{extremely} careful when doing this.  If you mess
this up, you will get BADLY BURNED, and it has happened before.

@node Vector, Bit Vector, Cons, Allocation of Objects in XEmacs Lisp
@section Vector
@cindex vector

As mentioned above, each vector is @code{malloc()}ed individually, and
all are threaded through the variable @code{all_vectors}.  Vectors are
marked strangely during garbage collection, by kludging the size
field.  Note that the @code{struct Lisp_Vector} is declared with its
@code{contents} field being a @emph{stretchy} array of one element.
It is actually @code{malloc()}ed with the right size, however, and
access to any element through the @code{contents} array works fine.
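In schematic form (field names follow the discussion above but are not
guaranteed to match @file{lisp.h} exactly), the declaration and the
allocation of a vector of @var{len} elements look like this:

@example
struct Lisp_Vector
@{
  long size;                 /* also kludged to hold the mark during GC */
  struct Lisp_Vector *next;  /* threads all vectors together
                                (all_vectors) */
  Lisp_Object contents[1];   /* ``stretchy'': declared with one element */
@};

/* The malloc()ed block is simply made big enough for all LEN
   elements, which then live directly after the struct.  */
vec = (struct Lisp_Vector *)
  allocate_lisp_storage (offsetof (struct Lisp_Vector, contents)
                         + len * sizeof (Lisp_Object));
@end example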
@node Bit Vector, Symbol, Vector, Allocation of Objects in XEmacs Lisp
@section Bit Vector
@cindex bit vector
@cindex vector, bit

Bit vectors work exactly like vectors, except for more complicated
code to access an individual bit, and except for the fact that bit
vectors are lrecords while vectors are not.  (The only difference here
is that there's an lrecord implementation pointer at the beginning and
the tag field in bit vector Lisp words is ``lrecord'' rather than
``vector''.)

@node Symbol, Marker, Bit Vector, Allocation of Objects in XEmacs Lisp
@section Symbol
@cindex symbol

Symbols are also allocated in frob blocks.  Symbols in the awful
horrible obarray structure are chained through their @code{next}
field.

Remember that @code{intern} looks up a symbol in an obarray, creating
one if necessary.

@node Marker, String, Symbol, Allocation of Objects in XEmacs Lisp
@section Marker
@cindex marker

Markers are allocated in frob blocks, as usual.  They are kept per
buffer in an unordered, doubly-linked list, so that they can easily be
removed.  (Formerly this was a singly-linked list, but in some cases
garbage collection took an extraordinarily long time due to the O(N^2)
time required to remove lots of markers from a buffer.)  Markers are
removed from a buffer in the finalize stage, in
@code{ADDITIONAL_FREE_marker()}.

@node String, Compiled Function, Marker, Allocation of Objects in XEmacs Lisp
@section String
@cindex string

As mentioned above, strings are a special case.  A string logically
consists of two parts: a fixed-size object (containing the length,
property list, and a pointer to the actual data), and the actual data
in the string.  The fixed-size object is a @code{struct Lisp_String}
and is allocated in frob blocks, as usual.  The actual data is stored
in special @dfn{string-chars blocks}, which are 8K blocks of memory.
Currently-allocated strings are simply laid end to end in these
string-chars blocks, with a pointer back to the @code{struct
Lisp_String} stored before each string in the string-chars block.
When a new string needs to be allocated, the remaining space at the
end of the last string-chars block is used if there's enough, and a
new string-chars block is created otherwise.

There are never any holes in the string-chars blocks, due to the
string compaction and relocation that happens at the end of garbage
collection.  During the sweep stage of garbage collection, when
objects are reclaimed, the garbage collector goes through all
string-chars blocks, looking for unused strings.  Each chunk of string
data is preceded by a pointer to the corresponding @code{struct
Lisp_String}, which indicates both whether the string is used and how
big the string is, i.e. how to get to the next chunk of string data.
Holes are compressed by block-copying the next string into the empty
space and relocating the pointer stored in the corresponding
@code{struct Lisp_String}.  @strong{This means you have to be careful
with strings in your code.}  See the section above on
@code{GCPRO}ing.

Note that there is one situation not handled: a string that is too big
to fit into a string-chars block.  Such strings, called @dfn{big
strings}, are all @code{malloc()}ed as their own block.  (#### Although
it would make more sense for the threshold for big strings to be
somewhat lower, e.g. 1/2 or 1/4 the size of a string-chars block.  It
seems that this was indeed the case formerly---indeed, the threshold
was set at 1/8---but Mly forgot about this when rewriting things for
19.8.)
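Schematically, each chunk in a string-chars block therefore looks like
this (a sketch following the description above; the real declaration
in @file{alloc.c} may differ in detail):

@example
struct string_chars
@{
  struct Lisp_String *string;  /* back pointer to the fixed-size part;
                                  an invalid value here means the chunk
                                  is free (cf. FREE_STRUCT_P) */
  unsigned char chars[1];      /* the actual (padded) string data */
@};
@end example

The back pointer is what lets the sweep phase walk a block chunk by
chunk, and lets the compactor relocate data while keeping the
fixed-size part of each string consistent.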
Note also that the string data in string-chars blocks is padded as
necessary so that proper alignment constraints on the @code{struct
Lisp_String} back pointers are maintained.

Finally, strings can be resized.  This happens in Mule when a
character is replaced with a character of a different length, or
during modeline frobbing.  (You could also export this to Lisp, but
this is not currently done.)  Resizing a string is a potentially
tricky process.  If the change is small enough that the padding can
absorb it, nothing other than a simple memory move needs to be done.
Keep in mind, however, that the string can't shrink too much, because
the offset to the next string in the string-chars block is computed by
looking at the length and rounding to the nearest multiple of four or
eight.  If the string would shrink or expand beyond the correct
padding, new string data needs to be allocated at the end of the last
string-chars block and the data moved appropriately.  This leaves some
dead string data, which is marked by putting a special marker of
0xFFFFFFFF in the @code{struct Lisp_String} pointer before the data
(there's no real @code{struct Lisp_String} to point to and relocate),
and storing the size of the dead string data (which would normally be
obtained from the now-nonexistent @code{struct Lisp_String}) at the
beginning of the dead string data gap.  The string compactor
recognizes this special 0xFFFFFFFF marker and handles it correctly.

@node Compiled Function, , String, Allocation of Objects in XEmacs Lisp
@section Compiled Function
@cindex compiled function
@cindex function, compiled

Not yet documented.

@node The Lisp Reader and Compiler, Evaluation; Stack Frames; Bindings, Allocation of Objects in XEmacs Lisp, Top
@chapter The Lisp Reader and Compiler
@cindex Lisp reader and compiler, the
@cindex reader and compiler, the Lisp
@cindex compiler, the Lisp reader and

Not yet documented.

@node Evaluation; Stack Frames; Bindings, Symbols and Variables, The Lisp Reader and Compiler, Top
@chapter Evaluation; Stack Frames; Bindings
@cindex evaluation; stack frames; bindings
@cindex stack frames; bindings, evaluation;
@cindex bindings, evaluation; stack frames;

@menu
* Evaluation::
* Dynamic Binding; The specbinding Stack; Unwind-Protects::
* Simple Special Forms::
* Catch and Throw::
* Error Trapping::
@end menu

@node Evaluation, Dynamic Binding; The specbinding Stack; Unwind-Protects, Evaluation; Stack Frames; Bindings, Evaluation; Stack Frames; Bindings
@section Evaluation
@cindex evaluation

@code{Feval()} evaluates the form (a Lisp object) that is passed to
it.  Note that evaluation is only non-trivial for two types of
objects: symbols and conses.  A symbol is evaluated simply by calling
@code{symbol-value} on it and returning the value.

Evaluating a cons means calling a function.  First, @code{eval} checks
to see if garbage collection is necessary, and calls
@code{garbage_collect_1()} if so.  It then increases the evaluation
depth by 1 (@code{lisp_eval_depth}, which must always be less than
@code{max_lisp_eval_depth}) and adds an element to the linked list of
@code{struct backtrace}'s (@code{backtrace_list}).  Each such
structure contains a pointer to the function being called plus a list
of the function's arguments.  Initially these values are stored
unevalled; as they are evaluated, the backtrace structure is updated.
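The backtrace structure contains roughly the following fields
(schematic, based on the description above; see @file{backtrace.h} for
the authoritative declaration):

@example
struct backtrace
@{
  struct backtrace *next;  /* enclosing frame; head is backtrace_list */
  Lisp_Object *function;   /* the function being called */
  Lisp_Object *args;       /* the (initially unevalled) arguments */
  int nargs;               /* argument count, or a special value while
                              the arguments are still unevalled */
  /* ... plus debugging and bookkeeping fields ... */
@};
@end example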
Garbage collection pays attention to the objects pointed to in the
backtrace structures (garbage collection might happen while a function
is being called or while an argument is being evaluated, and there
could easily be no other references to the arguments in the argument
list; once an argument is evaluated, however, the unevalled version is
not needed by eval, and so the backtrace structure is changed).

At this point, the function to be called is determined by looking at
the car of the cons (if this is a symbol, its function definition is
retrieved and the process repeated).  The function should then consist
of either a @code{Lisp_Subr} (built-in function written in C), a
@code{Lisp_Compiled_Function} object, or a cons whose car is one of
the symbols @code{autoload}, @code{macro} or @code{lambda}.

If the function is a @code{Lisp_Subr}, the Lisp object points to a
@code{struct Lisp_Subr} (created by @code{DEFUN()}), which contains a
pointer to the C function, a minimum and maximum number of arguments
(or possibly the special constants @code{MANY} or @code{UNEVALLED}), a
pointer to the symbol referring to that subr, and a couple of other
things.  If the subr wants its arguments @code{UNEVALLED}, they are
passed raw as a list.  Otherwise, an array of evaluated arguments is
created and put into the backtrace structure, and either passed whole
(@code{MANY}) or each argument is passed as a C argument.

If the function is a @code{Lisp_Compiled_Function},
@code{funcall_compiled_function()} is called.  If the function is a
lambda list, @code{funcall_lambda()} is called.  If the function is a
macro, [..... fill in] is done.  If the function is an autoload,
@code{do_autoload()} is called to load the definition and then eval
starts over [explain this more].

When @code{Feval()} exits, the evaluation depth is reduced by one, the
debugger is called if appropriate, and the current backtrace structure
is removed from the list.

Both @code{funcall_compiled_function()} and @code{funcall_lambda()}
need to go through the list of formal parameters to the function and
bind them to the actual arguments, checking for @code{&rest} and
@code{&optional} symbols in the formal parameters and making sure the
number of actual arguments is correct.
@code{funcall_compiled_function()} can do this a little more
efficiently, since the formal parameter list can be checked for sanity
when the compiled function object is created.

@code{funcall_lambda()} simply calls @code{Fprogn} to execute the code
in the lambda list.

@code{funcall_compiled_function()} calls the real byte-code
interpreter @code{execute_optimized_program()} on the byte-code
instructions, which are converted into an internal form for faster
execution.

When a compiled function is executed for the first time by
@code{funcall_compiled_function()}, or during the dump phase of
building XEmacs, the byte-code instructions are converted from a
@code{Lisp_String} (which is inefficient to access, especially in the
presence of MULE) into a @code{Lisp_Opaque} object containing an array
of unsigned char, which can be directly executed by the byte-code
interpreter.  At this time the byte code is also analyzed for validity
and transformed into a more optimized form, so that
@code{execute_optimized_program()} can really fly.

Here are some of the optimizations performed by the internal byte-code
transformer:
@enumerate
@item
References to the @code{constants} array are checked for out-of-range
indices, so that the byte interpreter doesn't have to.
@item
References to the @code{constants} array that will be used as a Lisp
variable are checked for being correct non-constant (i.e. not
@code{t}, @code{nil}, or keyword) symbols, so that the byte
interpreter doesn't have to.
@item
The maximum number of variable bindings in the byte-code is
pre-computed, so that space on the @code{specpdl} stack can be
pre-reserved once for the whole function execution.
@item
All byte-code jumps are relative to the current program counter
instead of the start of the program, thereby saving a register.
@item
One-byte relative jumps are converted from the byte-code form of
unsigned chars offset by 127 to machine-friendly signed chars.
@end enumerate

Of course, this transformation of the @code{instructions} should not
be visible to the user, so @code{Fcompiled_function_instructions()}
needs to know how to convert the optimized opaque object back into a
Lisp string that is identical to the original string from the
@file{.elc} file.  (Actually, the resulting string may (rarely)
contain slightly different, yet equivalent, byte code.)

@code{Ffuncall()} implements Lisp @code{funcall}.  @code{(funcall fun
x1 x2 x3 ...)} is equivalent to @code{(eval (list fun (quote x1)
(quote x2) (quote x3) ...))}.  @code{Ffuncall()} contains its own code
to do the evaluation, however, and is very similar to @code{Feval()}.

From the performance point of view, it is worth knowing that most of
the time in Lisp evaluation is spent executing @code{Lisp_Subr} and
@code{Lisp_Compiled_Function} objects via @code{Ffuncall()} (not
@code{Feval()}).

@code{Fapply()} implements Lisp @code{apply}, which is very similar to
@code{funcall} except that if the last argument is a list, the result
is the same as if each of the arguments in the list had been passed
separately.  @code{Fapply()} does some business to expand the last
argument if it's a list, then calls @code{Ffuncall()} to do the work.

@code{apply1()}, @code{call0()}, @code{call1()}, @code{call2()}, and
@code{call3()} call a function, passing it the argument(s) given (the
arguments are given as separate C arguments rather than being passed
as an array).  @code{apply1()} uses @code{Fapply()} while the others
use @code{Ffuncall()} to do the real work.

@node Dynamic Binding; The specbinding Stack; Unwind-Protects, Simple Special Forms, Evaluation, Evaluation; Stack Frames; Bindings
@section Dynamic Binding; The specbinding Stack; Unwind-Protects
@cindex dynamic binding; the specbinding stack; unwind-protects
@cindex binding; the specbinding stack; unwind-protects, dynamic
@cindex specbinding stack; unwind-protects, dynamic binding; the
@cindex unwind-protects, dynamic binding; the specbinding stack;

@example
struct specbinding
@{
  Lisp_Object symbol;
  Lisp_Object old_value;
  Lisp_Object (*func) (Lisp_Object); /* for unwind-protect */
@};
@end example

@code{struct specbinding} is used for local-variable bindings and
unwind-protects.  @code{specpdl} holds an array of @code{struct
specbinding}'s, @code{specpdl_ptr} points to the first free binding
slot in the array, @code{specpdl_size} specifies the total number of
binding slots in the array, and @code{max_specpdl_size} specifies the
maximum number of bindings the array can be expanded to hold.

@code{grow_specpdl()} increases the size of the @code{specpdl} array,
multiplying its size by 2 but never exceeding
@code{max_specpdl_size} (except that if this number is less than 400,
it is first set to 400).
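From C, these facilities are used in a stylized pattern.  The
following fragment is a hedged sketch of the common idiom, using the
functions described next (@code{my_cleanup} and @code{arg} are
hypothetical, and the exact signature of @code{unbind_to()} has varied
between versions):

@example
int speccount = specpdl_depth ();        /* remember the stack depth */

specbind (Qinhibit_quit, Qt);            /* dynamic binding, see below */
record_unwind_protect (my_cleanup, arg); /* cleanup on abnormal exit */
/* ... code that may signal an error or throw ... */
unbind_to (speccount, Qnil);             /* undo binding, run cleanups */
@end example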
@code{specbind()} binds a symbol to a value and is used for local
variables and @code{let} forms.  The symbol and its old value (which
might be @code{Qunbound}, indicating no prior value) are recorded in
the specpdl array, and @code{specpdl_ptr} is advanced by 1.

@code{record_unwind_protect()} implements an @dfn{unwind-protect},
which, when placed around a section of code, ensures that some
specified cleanup routine will be executed even if the code exits
abnormally (e.g. through a @code{throw} or quit).
@code{record_unwind_protect()} simply adds a new specbinding to the
@code{specpdl} array and stores the appropriate information in it.
The cleanup routine can either be a C function, which is stored in the
@code{func} field, or a @code{progn} form, which is stored in the
@code{old_value} field.

@code{unbind_to()} removes specbindings from the @code{specpdl} array
until the specified position is reached.  Each specbinding can be one
of three types:

@enumerate
@item
an unwind-protect with a C cleanup function (@code{func} is not 0, and
@code{old_value} holds an argument to be passed to the function);
@item
an unwind-protect with a Lisp form (@code{func} is 0, @code{symbol} is
@code{nil}, and @code{old_value} holds the form to be executed with
@code{Fprogn()}); or
@item
a local-variable binding (@code{func} is 0, @code{symbol} is not
@code{nil}, and @code{old_value} holds the old value, which is stored
as the symbol's value).
@end enumerate

@node Simple Special Forms, Catch and Throw, Dynamic Binding; The specbinding Stack; Unwind-Protects, Evaluation; Stack Frames; Bindings
@section Simple Special Forms
@cindex special forms, simple

@code{or}, @code{and}, @code{if}, @code{cond}, @code{progn},
@code{prog1}, @code{prog2}, @code{setq}, @code{quote},
@code{function}, @code{let*}, @code{let}, @code{while}

All of these are very simple and work as expected, calling
@code{Feval()} or @code{Fprogn()} as necessary and (in the case of
@code{let} and @code{let*}) using @code{specbind()} to create bindings
and @code{unbind_to()} to undo the bindings when finished.

Note that, with the exception of @code{Fprogn}, these functions are
typically called in real life only in interpreted code, since the byte
compiler knows how to convert calls to these functions directly into
byte code.

@node Catch and Throw, Error Trapping, Simple Special Forms, Evaluation; Stack Frames; Bindings
@section Catch and Throw
@cindex catch and throw
@cindex throw, catch and

@example
struct catchtag
@{
  Lisp_Object tag;
  Lisp_Object val;
  struct catchtag *next;
  struct gcpro *gcpro;
  jmp_buf jmp;
  struct backtrace *backlist;
  int lisp_eval_depth;
  int pdlcount;
@};
@end example

@code{catch} is a Lisp function that places a catch around a body of
code.  A catch is a means of non-local exit from the code.  When a
catch is created, a tag is specified; executing a @code{throw} to this
tag will exit from the body of code caught with this tag, and the
value of the @code{catch} will be the value given in the call to
@code{throw}.  If there is no such call, the code will be executed
normally.

Information pertaining to a catch is held in a @code{struct catchtag},
which is placed at the head of a linked list pointed to by
@code{catchlist}.  @code{internal_catch()} is passed a C function to
call (@code{Fprogn()} when Lisp @code{catch} is called) and arguments
to give it, and places a catch around the function.  Each @code{struct
catchtag} is held in the stack frame of the @code{internal_catch()}
instance that created the catch.
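At its core this is the classic @code{setjmp}/@code{longjmp} idiom; in
outline (a simplification of the description that follows, not the
literal code from @file{eval.c}):

@example
struct catchtag c;
c.tag = tag;
c.next = catchlist;
catchlist = &c;               /* push the catch onto the catch list */
if (_setjmp (c.jmp))
  @{
    /* A _longjmp from Fthrow lands here; the dynamic state has been
       restored and c.val holds the value passed to `throw'.  */
    catchlist = c.next;
    return c.val;
  @}
c.val = (*func) (arg);        /* no throw: an ordinary return */
catchlist = c.next;
return c.val;
@end example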
@code{internal_catch()} is fairly straightforward.  It stores into the
@code{struct catchtag} the tag name and the current values of
@code{backtrace_list}, @code{lisp_eval_depth}, @code{gcprolist}, and
the offset into the @code{specpdl} array, sets a jump point with
@code{_setjmp()} (storing the jump point into the @code{struct
catchtag}), and calls the function.  Control will return to
@code{internal_catch()} either when the function exits normally or
through a @code{_longjmp()} to this jump point.  In the latter case,
@code{throw} will store the value to be returned into the
@code{struct catchtag} before jumping.  When it's done,
@code{internal_catch()} removes the @code{struct catchtag} from the
catchlist and returns the proper value.

@code{Fthrow()} goes up through the catchlist until it finds one with
a matching tag.  It then calls @code{unbind_catch()} to restore
everything to what it was when the appropriate catch was set, stores
the return value in the @code{struct catchtag}, and jumps (with
@code{_longjmp()}) to its jump point.

@code{unbind_catch()} removes all catches from the catchlist until it
finds the correct one.  Some of the catches might have been placed for
error-trapping, and if so, the appropriate entries on the handlerlist
must be removed (see ``errors'').  @code{unbind_catch()} also restores
the values of @code{gcprolist}, @code{backtrace_list}, and
@code{lisp_eval_depth}, and calls @code{unbind_to()} to undo any
specbindings created since the catch.

@node Error Trapping, , Catch and Throw, Evaluation; Stack Frames; Bindings
@section Error Trapping
@cindex error trapping

@subheading @code{call_trapping_problems()}

This is equivalent to @code{(*fun) (arg)}, except that various
conditions can be trapped or inhibited, according to FLAGS.

@itemize @bullet
@item
If FLAGS does not contain NO_INHIBIT_ERRORS, when an error occurs, the
error is caught and a warning is issued, specifying the specific error
that occurred and a backtrace.  In that case, WARNING_STRING should be
given, and will be printed at the beginning of the error to indicate
where the error occurred.
@item
If FLAGS does not contain NO_INHIBIT_THROWS, all attempts to
@code{throw} out of the function being called are trapped, and a
warning issued.  (Again, WARNING_STRING should be given.)
@item
If FLAGS contains INHIBIT_WARNING_ISSUE, no warnings are issued; this
applies to recursive invocations of @code{call_trapping_problems},
too.
@item
If FLAGS contains POSTPONE_WARNING_ISSUE, no warnings are issued; but
values useful for generating a warning are still computed (in
particular, the backtrace), so that the calling function can issue a
warning.
@item
If FLAGS contains ISSUE_WARNINGS_AT_DEBUG_LEVEL, warnings will be
issued, but at level @code{debug}, which normally is below the minimum
specified by @code{log-warning-minimum-level}, meaning such warnings
will be ignored entirely.  (The user can change this variable,
however, to see the warnings.)

Note: If neither NO_INHIBIT_THROWS nor NO_INHIBIT_ERRORS is given, you
are @strong{guaranteed} that there will be no non-local exits out of
this function.
@item
If FLAGS contains INHIBIT_QUIT, QUIT using @kbd{C-g} is inhibited.
(This is @strong{rarely} a good idea.  Unless you use
NO_INHIBIT_ERRORS, QUIT is automatically caught as well, and treated
as an error; you can check for this using @code{EQ
(problems->error_conditions, Qquit)}.)
@item
If FLAGS contains UNINHIBIT_QUIT, QUIT checking will be explicitly
turned on.
(A quit will abort the code being called, but will still be trapped and
reported as an error, unless NO_INHIBIT_ERRORS is given.)  This is
useful when QUIT checking has been turned off by a higher-level caller.

@item
If FLAGS contains INHIBIT_GC, garbage collection is inhibited.  This is
useful for Lisp called within redisplay, for example.

@item
If FLAGS contains INHIBIT_EXISTING_PERMANENT_DISPLAY_OBJECT_DELETION,
Lisp code is not allowed to delete any windows, buffers, frames,
devices, or consoles that were already in existence at the time this
function was called.  (However, it's perfectly legal for code to create
a new buffer and then delete it.)  #### It might be useful to have a
flag that inhibits deletion of a specific permanent display object and
everything it's attached to (e.g. a window, and the buffer, frame,
device, and console it's attached to).

@item
If FLAGS contains INHIBIT_EXISTING_BUFFER_TEXT_MODIFICATION, Lisp code
is not allowed to modify the text of any buffers that were already in
existence at the time this function was called.  (However, it's
perfectly legal for code to create a new buffer and then modify its
text.)

@quotation
[These last two flags are implemented using the global variables
@code{Vdeletable_permanent_display_objects} and
@code{Vmodifiable_buffers}, which keep track of a list of all buffers or
permanent display objects created since the last time one of these flags
was set.  The code that deletes buffers, etc. and modifies buffers
checks

@enumerate
@item
whether the corresponding flag is set (through the global variable
@code{inhibit_flags} or its accessor function
@code{get_inhibit_flags()}), and
@item
whether the object to be modified or deleted is not in the appropriate
list.
@end enumerate

If both conditions hold, it signals an error.

Recursive calls to @code{call_trapping_problems()} are allowed.  In the
case of the two flags mentioned above, the current values of the global
variables are stored in an unwind-protect, and they're reset to
@code{nil}.]
@end quotation

@item
If FLAGS contains INHIBIT_ENTERING_DEBUGGER, the debugger will not be
entered if an error occurs inside the Lisp code being called, even if
the user has asked to enter the debugger on error.  In such a case, a
warning is issued stating that access to the debugger is denied, unless
INHIBIT_WARNING_ISSUE has also been supplied.  This is useful when
calling Lisp code inside redisplay, in menu callbacks, etc., because in
such cases either the display is in an inconsistent state or doing
window operations is explicitly forbidden by the OS, and the debugger
would cause visual changes on the screen and might create another frame.

@item
If FLAGS contains INHIBIT_ANY_CHANGE_AFFECTING_REDISPLAY, no changes of
any sort to extents, faces, glyphs, buffer text, specifiers relating to
display, other variables relating to display, splitting, deleting, or
resizing windows or frames, deleting buffers, windows, frames, devices,
or consoles, etc. are allowed.  This is for things called absolutely in
the middle of redisplay, which expects things to be @strong{exactly} the
same after the call as before.  This isn't completely implemented and
needs to be thought out some more to determine exactly what its
semantics are.
For the moment, turning on this flag also turns on

@itemize @minus
@item
INHIBIT_EXISTING_PERMANENT_DISPLAY_OBJECT_DELETION
@item
INHIBIT_EXISTING_BUFFER_TEXT_MODIFICATION
@item
INHIBIT_ENTERING_DEBUGGER
@item
INHIBIT_WARNING_ISSUE
@item
INHIBIT_GC
@end itemize

@item
#### The following five flags are defined, but unimplemented:

@example
#define INHIBIT_EXISTING_CODING_SYSTEM_DELETION   (1<<6)
#define INHIBIT_EXISTING_CHARSET_DELETION         (1<<7)
#define INHIBIT_PERMANENT_DISPLAY_OBJECT_CREATION (1<<8)
#define INHIBIT_CODING_SYSTEM_CREATION            (1<<9)
#define INHIBIT_CHARSET_CREATION                  (1<<10)
@end example

@item
FLAGS containing CALL_WITH_SUSPENDED_ERRORS is a sign that
@code{call_with_suspended_errors()} was invoked.  This exists only for
debugging purposes: often we want to break when a signal happens, but
ignore signals from @code{call_with_suspended_errors()}, because they
occur often and for legitimate reasons.
@end itemize

If PROBLEM is non-zero, it should be a pointer to a structure into which
exact information about any problems that occur (either an error or an
attempted throw past this boundary) will be stored.

If a problem occurred and aborted operation (error, quit, or invalid
throw), @code{Qunbound} is returned.  Otherwise the return value from
the call to @code{(*fun) (arg)} is returned.

@node Symbols and Variables, Buffers, Evaluation; Stack Frames; Bindings, Top
@chapter Symbols and Variables
@cindex symbols and variables
@cindex variables, symbols and

@menu
* Introduction to Symbols::
* Obarrays::
* Symbol Values::
@end menu

@node Introduction to Symbols, Obarrays, Symbols and Variables, Symbols and Variables
@section Introduction to Symbols
@cindex symbols, introduction to

A symbol is basically just an object with four fields: a name (a
string), a value (some Lisp object), a function (some Lisp object), and
a property list (usually a list of alternating property/value pairs).
What makes symbols special is that there is usually only one symbol with
a given name, and the symbol is referred to by name.  This makes a
symbol a convenient way of calling up data by name, i.e. of implementing
variables.  (The variable's value is stored in the @dfn{value slot}.)
Similarly, functions are referenced by name, and the definition of the
function is stored in a symbol's @dfn{function slot}.  This means that
there can be a distinct function and variable with the same name.  The
property list is used as a more general mechanism of associating
additional values with particular names, and once again the namespace is
independent of the function and variable namespaces.

@node Obarrays, Symbol Values, Introduction to Symbols, Symbols and Variables
@section Obarrays
@cindex obarrays

The identification of symbols with their names is accomplished through a
structure called an obarray, which is just a poorly-implemented hash
table mapping from strings to symbols whose name is that string.  (I say
``poorly implemented'' because an obarray appears in Lisp as a vector
with some hidden fields rather than as its own opaque type.  This is an
Emacs Lisp artifact that should be fixed.)  Obarrays are implemented as
vectors of some fixed size (which should be a prime for best results),
where each ``bucket'' of the vector contains one or more symbols,
threaded through a hidden @code{next} field in the symbol.  Looking up a
symbol in an obarray and adding a symbol to an obarray are accomplished
through standard hash-table techniques.

The standard Lisp function for working with symbols and obarrays is
@code{intern}.
This looks up a symbol in an obarray given its name; if it's not found,
a new symbol is automatically created with the specified name, added to
the obarray, and returned.  This is what happens when the Lisp reader
encounters a symbol (or more precisely, encounters the name of a symbol)
in some text that it is reading.  There is a standard obarray called
@code{obarray} that is used for this purpose, although the Lisp
programmer is free to create his own obarrays and @code{intern} symbols
in them.

Note that once a symbol is in an obarray, it stays there until something
is done about it; since the standard obarray @code{obarray} always stays
around, once you use any particular variable name, a corresponding
symbol will stay around in @code{obarray} until you exit XEmacs.  Note
that @code{obarray} itself is a variable, and as such there is a symbol
in @code{obarray} whose name is @code{"obarray"} and which contains
@code{obarray} as its value.  Note also that this call to @code{intern}
occurs only in the Lisp reader, not when the code is executed (at which
point the symbol is already around, stored as such in the definition of
the function).

You can create your own obarray using @code{make-vector} (this is ugly,
but it's a historical artifact) and intern symbols into that obarray.
Doing that will result in two or more symbols with the same name.
However, at most one of these symbols is in the standard @code{obarray}:
You cannot have two symbols of the same name in any particular obarray.
Note that you cannot add a symbol to an obarray in any fashion other
than using @code{intern}: i.e. you can't take an existing symbol and put
it in an existing obarray.  Nor can you change the name of an existing
symbol.  (Since obarrays are vectors, you can violate the consistency of
things by storing directly into the vector, but let's ignore that
possibility.)

Usually symbols are created by @code{intern}, but if you really want,
you can explicitly create a symbol using @code{make-symbol}, giving it
some name.  The resulting symbol is not in any obarray (i.e. it is
@dfn{uninterned}), and you can't add it to any obarray.  Therefore its
primary purpose is as a symbol to use in macros, where it avoids
namespace pollution.  It can also be used as a carrier of information,
but cons cells could probably be used just as well.

You can also use @code{intern-soft} to look up a symbol but not create a
new one, and @code{unintern} to remove a symbol from an obarray.
@code{unintern} returns the removed symbol.  (Remember: You can't put
the symbol back into any obarray.)  Finally, @code{mapatoms} maps over
all of the symbols in an obarray.

@node Symbol Values, , Obarrays, Symbols and Variables
@section Symbol Values
@cindex symbol values
@cindex values, symbol

The value field of a symbol normally contains a Lisp object.  However, a
symbol can be @dfn{unbound}, meaning that it logically has no value.
This is indicated internally by storing a special Lisp object, called
@dfn{the unbound marker}, in the value field; the unbound marker is kept
in the global variable @code{Qunbound}.

The unbound marker is of a special Lisp object type called
@dfn{symbol-value-magic}.  It is impossible for the Lisp programmer to
directly create or access any object of this type.  @strong{You must not
let any ``symbol-value-magic'' object escape to the Lisp level.}
Printing any of these objects will cause the message @samp{INTERNAL
EMACS BUG} to appear as part of the print representation.  (You may see
this normally when you call @code{debug_print()} from the debugger on a
Lisp object.)
If you let one of these objects escape to the Lisp level, you will
violate a number of assumptions contained in the C code and cause the
unbound marker to stop functioning correctly.

When a symbol is created, its value field (and function field) are set
to @code{Qunbound}.  The Lisp programmer can restore these conditions
later using @code{makunbound} or @code{fmakunbound}, and can query to
see whether the value or function fields are @dfn{bound} (i.e. have a
value other than @code{Qunbound}) using @code{boundp} and
@code{fboundp}.  The fields are set to a normal Lisp object using
@code{set} (or @code{setq}) and @code{fset}.

Other symbol-value-magic objects are used as special markers to indicate
variables that have non-normal properties.  This includes any variables
that are tied into C variables (setting the variable magically sets some
global variable in the C code, and likewise for retrieving the
variable's value), variables that magically tie into slots in the
current buffer, variables that are buffer-local, etc.  The
symbol-value-magic object is stored in the value cell in place of a
normal object, and the code to retrieve a symbol's value
(i.e. @code{symbol-value}) knows how to do special things with these
objects.  This means that you should not just fetch the value cell
directly if you want a symbol's value.

The exact workings of this are rather complex and are well documented in
comments in @file{buffer.c}, @file{symbols.c}, and @file{lisp.h}.

@node Buffers, Text, Symbols and Variables, Top
@chapter Buffers
@cindex buffers

@menu
* Introduction to Buffers::     A buffer holds a block of text such as a file.
* Buffer Lists::                Keeping track of all buffers.
* Markers and Extents::         Tagging locations within a buffer.
* The Buffer Object::           The Lisp object corresponding to a buffer.
@end menu

@node Introduction to Buffers, Buffer Lists, Buffers, Buffers
@section Introduction to Buffers
@cindex buffers, introduction to

A buffer is logically just a Lisp object that holds some text.  In this,
it is like a string, but a buffer is optimized for frequent insertion
and deletion, while a string is not.  Furthermore:

@enumerate
@item
Buffers are @dfn{permanent} objects, i.e. once you create them, they
remain around, and need to be explicitly deleted before they go away.

@item
Each buffer has a unique name, which is a string.  Buffers are normally
referred to by name.  In this respect, they are like symbols.

@item
Buffers have a default insertion position, called @dfn{point}.
Inserting text (unless you explicitly give a position) goes at point,
and moves point forward past the text.  This is what is going on when
you type text into Emacs.

@item
Buffers have lots of extra properties associated with them.

@item
Buffers can be @dfn{displayed}.  What this means is that there exist a
number of @dfn{windows}, which are objects that correspond to some
visible section of your display, and each window has an associated
buffer, and the current contents of the buffer are shown in that section
of the display.  The redisplay mechanism (which takes care of doing
this) knows how to look at the text of a buffer and come up with some
reasonable way of displaying it.  Many of the properties of a buffer
control how the buffer's text is displayed.

@item
One buffer is distinguished and called the @dfn{current buffer}.  It is
stored in the variable @code{current_buffer}.  Buffer operations operate
on this buffer by default.  When you are typing text into a buffer, the
buffer you are typing into is always @code{current_buffer}.
Switching to a different window changes the current buffer.  Note that
Lisp code can temporarily change the current buffer using
@code{set-buffer} (often enclosed in a @code{save-excursion} so that the
former current buffer gets restored when the code is finished).
However, calling @code{set-buffer} will @strong{not} cause a permanent
change in the current buffer.  The reason for this is that the top-level
event loop sets @code{current_buffer} to the buffer of the selected
window each time it finishes executing a user command.
@end enumerate

Make sure you understand the distinction between @dfn{current buffer}
and @dfn{buffer of the selected window}, and the distinction between
@dfn{point} of the current buffer and @dfn{window-point} of the selected
window.  (This latter distinction is explained in detail in the section
on windows.)

@node Buffer Lists, Markers and Extents, Introduction to Buffers, Buffers
@section Buffer Lists
@cindex buffer lists

Recall from earlier that buffers are @dfn{permanent} objects, i.e. that
they remain around until explicitly deleted.  This entails that there is
a list of all the buffers in existence.  This list is actually an
assoc-list (mapping from the buffer's name to the buffer) and is stored
in the global variable @code{Vbuffer_alist}.  The order of the buffers
in the list is important: the buffers are ordered approximately from
most-recently-used to least-recently-used.  Switching to a buffer using
@code{switch-to-buffer}, @code{pop-to-buffer}, etc. and switching
windows using @code{other-window}, etc. usually brings the new current
buffer to the front of the list.  @code{switch-to-buffer},
@code{other-buffer}, etc. look at the beginning of the list to find an
alternative buffer to suggest.  You can also explicitly move a buffer to
the end of the list using @code{bury-buffer}.

In addition to the global ordering in @code{Vbuffer_alist}, each frame
has its own ordering of the list.  These lists always contain the same
elements as @code{Vbuffer_alist}, although possibly in a different
order.  @code{buffer-list} normally returns the list for the selected
frame.  This allows you to work in separate frames without things
interfering with each other.

The standard way to look up a buffer given a name is @code{get-buffer},
and the standard way to create a new buffer is @code{get-buffer-create},
which looks up a buffer with a given name, creating a new one if
necessary.  These operations correspond exactly to the symbol operations
@code{intern-soft} and @code{intern}, respectively.  You can also force
a new buffer to be created using @code{generate-new-buffer}, which takes
a name and (if necessary) makes a unique name from this by appending a
number, and then creates the buffer.  This is basically like the symbol
operation @code{gensym}.

@node Markers and Extents, The Buffer Object, Buffer Lists, Buffers
@section Markers and Extents
@cindex markers and extents
@cindex extents, markers and

Some of the things associated with a buffer are logically attached to
particular buffer positions.  Such attachments can be used to keep track
of a buffer position when text is inserted and deleted, so that it
remains at the same spot relative to the text around it; to assign
properties to particular sections of text; etc.  There are two objects
that are useful in this regard: @dfn{markers} and @dfn{extents}.

A @dfn{marker} is simply a flag placed at a particular buffer position,
which is moved around as text is inserted and deleted.
Markers are used for all sorts of purposes, such as the @code{mark} that
is the other end of textual regions to be cut, copied, etc.

An @dfn{extent} is similar to two markers plus some associated
properties, and is used to keep track of regions in a buffer as text is
inserted and deleted, and to add properties (e.g. fonts) to particular
regions of text.  The external interface of extents is explained
elsewhere.

The important thing here is that markers and extents simply contain
buffer positions as integers, and every time text is inserted or
deleted, these positions must be updated.  In order to minimize the
amount of shuffling that needs to be done, the positions in markers and
extents (there's one per marker, two per extent) are stored as
@code{Membpos}'s.  This means that they only need to be moved when the
text is physically moved in memory; since the gap structure tries to
minimize this, it also minimizes the number of marker and extent indices
that need to be adjusted.  Look in @file{insdel.c} for the details of
how this works.

One other important distinction is that markers are @dfn{temporary}
while extents are @dfn{permanent}.  This means that markers disappear as
soon as there are no more pointers to them, and correspondingly, there
is no way to determine what markers are in a buffer if you are just
given the buffer.  Extents remain in a buffer until they are detached
(which could happen as a result of text being deleted) or the buffer is
deleted, and primitives do exist to enumerate the extents in a buffer.

@node The Buffer Object, , Markers and Extents, Buffers
@section The Buffer Object
@cindex buffer object, the
@cindex object, the buffer

Buffers contain fields not directly accessible by the Lisp programmer.
We describe them here, naming them by the names used in the C code.
Many are accessible indirectly in Lisp programs via Lisp primitives.

@table @code
@item name
The buffer name is a string that names the buffer.  It is guaranteed to
be unique.  @xref{Buffer Names,,, lispref, XEmacs Lisp Reference Manual}.

@item save_modified
This field contains the time when the buffer was last saved, as an
integer.  @xref{Buffer Modification,,, lispref, XEmacs Lisp Reference
Manual}.

@item modtime
This field contains the modification time of the visited file.  It is
set when the file is written or read.  Every time the buffer is written
to the file, this field is compared to the modification time of the
file.  @xref{Buffer Modification,,, lispref, XEmacs Lisp Reference
Manual}.

@item auto_save_modified
This field contains the time when the buffer was last auto-saved.

@item last_window_start
This field contains the @code{window-start} position in the buffer as of
the last time the buffer was displayed in a window.

@item undo_list
This field points to the buffer's undo list.  @xref{Undo,,, lispref,
XEmacs Lisp Reference Manual}.

@item syntax_table_v
This field contains the syntax table for the buffer.  @xref{Syntax
Tables,,, lispref, XEmacs Lisp Reference Manual}.

@item downcase_table
This field contains the conversion table for converting text to lower
case.  @xref{Case Tables,,, lispref, XEmacs Lisp Reference Manual}.

@item upcase_table
This field contains the conversion table for converting text to upper
case.  @xref{Case Tables,,, lispref, XEmacs Lisp Reference Manual}.

@item case_canon_table
This field contains the conversion table for canonicalizing text for
case-folding search.  @xref{Case Tables,,, lispref, XEmacs Lisp
Reference Manual}.
@item case_eqv_table
This field contains the equivalence table for case-folding search.
@xref{Case Tables,,, lispref, XEmacs Lisp Reference Manual}.

@item display_table
This field contains the buffer's display table, or @code{nil} if it
doesn't have one.  @xref{Display Tables,,, lispref, XEmacs Lisp
Reference Manual}.

@item markers
This field contains the chain of all markers that currently point into
the buffer.  Deletion of text in the buffer, and motion of the buffer's
gap, must check each of these markers and perhaps update it.
@xref{Markers,,, lispref, XEmacs Lisp Reference Manual}.

@item backed_up
This field is a flag that tells whether a backup file has been made for
the visited file of this buffer.

@item mark
This field contains the mark for the buffer.  The mark is a marker,
hence it is also included on the list @code{markers}.  @xref{The Mark,,,
lispref, XEmacs Lisp Reference Manual}.

@item mark_active
This field is non-@code{nil} if the buffer's mark is active.

@item local_var_alist
This field contains the association list describing the variables local
in this buffer, and their values, with the exception of local variables
that have special slots in the buffer object.  (Those slots are omitted
from this table.)  @xref{Buffer-Local Variables,,, lispref, XEmacs Lisp
Reference Manual}.

@item modeline_format
This field contains a Lisp object which controls how to display the mode
line for this buffer.  @xref{Modeline Format,,, lispref, XEmacs Lisp
Reference Manual}.

@item base_buffer
This field holds the buffer's base buffer (if it is an indirect buffer),
or @code{nil}.
@end table

@node Text, Multilingual Support, Buffers, Top
@chapter Text
@cindex text

@menu
* The Text in a Buffer::        Representation of the text in a buffer.
* Ibytes and Ichars::           Representation of individual characters.
* Byte-Char Position Conversion::
* Searching and Matching::      Higher-level algorithms.
@end menu

@node The Text in a Buffer, Ibytes and Ichars, Text, Text
@section The Text in a Buffer
@cindex text in a buffer, the
@cindex buffer, the text in a

The text in a buffer consists of a sequence of zero or more characters.
A @dfn{character} is an integer that logically represents a letter,
number, space, or other unit of text.  Most of the characters that you
will encounter belong to the ASCII set of characters, but there are also
characters for various sorts of accented letters, special symbols,
Chinese and Japanese ideograms (i.e. Kanji, Katakana, etc.), Cyrillic
and Greek letters, etc.  The actual number of possible characters is
quite large.

For now, we can view a character as some non-negative integer that has
some shape that defines how it typically appears (e.g. as an uppercase
A).  (The exact way in which a character appears depends on the font
used to display the character.)

The internal type of characters in the C code is an @code{Ichar}; this
is just an @code{int}, but using a symbolic type makes the code clearer.

Between any two adjacent characters in a buffer is a @dfn{buffer
position} or @dfn{character position}.  We can speak of the character
before or after a particular buffer position, and when you insert a
character at a particular position, all characters after that position
end up at new positions.  When we speak of the character @dfn{at} a
position, we really mean the character after the position.  (This
schizophrenia between a buffer position being ``between'' two characters
and ``on'' a character is rampant in Emacs.)  Buffer positions are
numbered starting at 1.
This means that position 1 is before the first character, and position 0
is not valid.  If there are N characters in a buffer, then buffer
position N+1 is after the last one, and position N+2 is not valid.

The internal makeup of the @code{Ichar} integer varies depending on
whether we have compiled with MULE support.  If not, the @code{Ichar}
integer is an 8-bit integer with possible values from 0 - 255.  0 - 127
are the standard ASCII characters, while 128 - 255 are the characters
from the ISO-8859-1 character set.  If we have compiled with MULE
support, an @code{Ichar} is a 19-bit integer, with the various bits
having meanings according to a complex scheme that will be detailed
later.  The characters numbered 0 - 255 still have the same meanings as
for the non-MULE case, though.

Internally, the text in a buffer is represented in a fairly simple
fashion: as a contiguous array of bytes, with a @dfn{gap} of some size
in the middle.  Although the gap is of some substantial size in bytes,
there is no text contained within it: from the perspective of the text
in the buffer, it does not exist.  The gap logically sits at some buffer
position, between two characters (or possibly at the beginning or end of
the buffer).  Insertion of text in a buffer at a particular position is
always accomplished by first moving the gap to that position
(i.e. through some block moving of text), then writing the text into the
beginning of the gap, thereby shrinking the gap.  If the gap shrinks
down to nothing, a new gap is created.  (What actually happens is that a
new gap is ``created'' at the end of the buffer's text, which requires
nothing more than changing a couple of indices; then the gap is
``moved'' to the position where the insertion needs to take place by
moving up in memory all the text after that position.)  Similarly,
deletion occurs by moving the gap to the place where the text is to be
deleted, and then simply expanding the gap to include the deleted text.
(@dfn{Expanding} and @dfn{shrinking} the gap as just described means
just that the internal indices that keep track of where the gap is
located are changed.)

Note that the total amount of memory allocated for a buffer's text never
decreases while the buffer is live.  Therefore, if you load up a
20-megabyte file and then delete all but one character, there will be a
20-megabyte gap, which won't get any smaller (except by inserting
characters back again).  Once the buffer is killed, the memory allocated
for the buffer text will be freed, but it will still be sitting on the
heap, taking up virtual memory, and will not be released back to the
operating system.  (However, if you have compiled XEmacs with rel-alloc,
the situation is different.  In this case, the space @emph{will} be
released back to the operating system.  However, this tends to result in
a noticeable speed penalty.)

Astute readers may notice that the text in a buffer is represented as an
array of @emph{bytes}, while (at least in the MULE case) an @code{Ichar}
is a 19-bit integer, which clearly cannot fit in a byte.  This means (of
course) that the text in a buffer uses a different representation from
an @code{Ichar}: specifically, the 19-bit @code{Ichar} becomes a series
of one to four bytes.  The conversion between these two representations
is complex and will be described later.

In the non-MULE case, everything is very simple: an @code{Ichar} is an
8-bit value, which fits neatly into one byte.
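To make the gap mechanics concrete, here is a minimal illustrative
sketch of gap motion and insertion at the byte level.  The structure and
function names are hypothetical simplifications, not the actual code
(which lives in @file{insdel.c} and must also deal with markers,
extents, and the byte/char bookkeeping):

@example
/* Illustrative sketch of a gap buffer; not the real insdel.c code.
   Offsets here are 0-based for simplicity. */
#include <string.h>

struct gap_buffer
@{
  unsigned char *text;  /* [0, gap_start) and [gap_start + gap_size,
                           total) hold the actual buffer text */
  int gap_start;        /* byte offset where the gap begins */
  int gap_size;         /* number of unused bytes in the gap */
  int total;            /* total bytes allocated */
@};

/* Move the gap so that it starts at POS, by block-moving the text
   between the old and new gap positions. */
static void
sketch_move_gap (struct gap_buffer *b, int pos)
@{
  if (pos < b->gap_start)       /* gap moves left: shift text right */
    memmove (b->text + pos + b->gap_size, b->text + pos,
             b->gap_start - pos);
  else if (pos > b->gap_start)  /* gap moves right: shift text left */
    memmove (b->text + b->gap_start,
             b->text + b->gap_start + b->gap_size,
             pos - b->gap_start);
  b->gap_start = pos;
@}

/* Insert LEN bytes at POS, assuming the gap is already big enough. */
static void
sketch_insert_bytes (struct gap_buffer *b, int pos,
                     const unsigned char *bytes, int len)
@{
  sketch_move_gap (b, pos);
  memcpy (b->text + b->gap_start, bytes, len);
  b->gap_start += len;          /* writing into the gap shrinks it */
  b->gap_size -= len;
@}
@end example

Note how deletion requires no copying at all: expanding the gap over the
deleted bytes (i.e. adjusting @code{gap_start} and @code{gap_size})
suffices, exactly as described above.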
If we are given a buffer position and want to retrieve the character at
that position, we need to follow these steps:

@enumerate
@item
Pretend there's no gap, and convert the buffer position into a
@dfn{byte index} that indexes to the appropriate byte in the buffer's
stream of textual bytes.  By convention, byte indices begin at 1, just
like buffer positions.  In the non-MULE case, byte indices and buffer
positions are identical, since one character equals one byte.
@item
Convert the byte index into a @dfn{memory index}, which takes the gap
into account.  The memory index is a direct index into the block of
memory that stores the text of a buffer.  This basically just involves
checking to see if the byte index is past the gap, and if so, adding the
size of the gap to it.  By convention, memory indices begin at 1, just
like buffer positions and byte indices, and when referring to the
position that is @dfn{at} the gap, we always use the memory position at
the @emph{beginning}, not at the end, of the gap.
@item
Fetch the appropriate bytes at the determined memory position.
@item
Convert these bytes into an @code{Ichar}.
@end enumerate

In the non-MULE case, (3) and (4) boil down to a simple one-byte memory
access.

Note that we have defined three types of positions in a buffer:

@enumerate
@item
@dfn{buffer positions} or @dfn{character positions}, typedef
@code{Charbpos}
@item
@dfn{byte indices}, typedef @code{Bytebpos}
@item
@dfn{memory indices}, typedef @code{Membpos}
@end enumerate

All three typedefs are just @code{int}s, but defining them this way
makes things a lot clearer.

Most code works with buffer positions.  In particular, all Lisp code
that refers to text in a buffer uses buffer positions.  Lisp code does
not know that byte indices or memory indices exist.

Finally, we have a typedef for the bytes in a buffer.  This is an
@code{Ibyte}, which is an @code{unsigned char}.  Referring to them as
@code{Ibyte}s underscores the fact that we are working with a string of
bytes in the internal Emacs buffer representation rather than in one of
a number of possible alternative representations (e.g. EUC-encoded text,
etc.).

@node Ibytes and Ichars, Byte-Char Position Conversion, The Text in a Buffer, Text
@section Ibytes and Ichars
@cindex Ibytes and Ichars
@cindex Ichars, Ibytes and

Not yet documented.

@node Byte-Char Position Conversion, Searching and Matching, Ibytes and Ichars, Text
@section Byte-Char Position Conversion
@cindex byte-char position conversion
@cindex position conversion, byte-char
@cindex conversion, byte-char position

Oct 2004: This is what I wrote when describing the previous algorithm:

@quotation
The basic algorithm we use is to keep track of a known region of
characters in each buffer, all of which are of the same width.  We keep
track of the boundaries of the region in both Charbpos and Bytebpos
coordinates and also keep track of the char width, which is 1 - 4 bytes.
If the position we're translating is not in the known region, then we
invoke a function to update the known region to surround the position in
question.  This assumes locality of reference, which is usually the
case.

Note that the function to update the known region can be simple or
complicated depending on how much information we cache.  In addition to
the known region, we always cache the correct conversions for point,
BEGV, and ZV, as well as 16 further positions where the conversion is
known.
We only look in the cache or update it when we need to move the known
region more than a certain amount (currently 50 chars), and then we
throw away a ``random'' value and replace it with the newly calculated
value.

Finally, we maintain an extra flag that tracks whether the buffer is
entirely ASCII, to speed up the conversions even more.  This flag is
actually of dubious value because in an entirely-ASCII buffer the known
region will always span the entire buffer (in fact, we update the flag
based on this fact), and so all we're saving is a few machine cycles.

A potentially smarter method than what we do with known regions and
cached positions would be to keep some sort of pseudo-extent layer over
the buffer; maybe keep track of the charbpos/bytebpos correspondence at
the beginning of each line, which would allow us to do a binary search
over the pseudo-extents to narrow things down to the correct line, at
which point you could use a linear movement method.  This would also
mesh well with efficiently implementing a line-numbering scheme.
However, you have to weigh the amount of time spent updating the cache
vs. the savings that result from it.  In reality, we modify the buffer
far less often than we access it, so a cache of this sort that provides
guaranteed LOG (N) performance (or perhaps N * LOG (N), if we set a
maximum on the cache size) would indeed be a win, particularly in very
large buffers.  If we ever implement this, we should probably set a
reasonably high minimum below which we use the old method, because the
time spent updating the fancy cache would likely become dominant when
making buffer modifications in smaller buffers.

Note also that we have to multiply or divide by the char width in order
to convert the positions.  We do some tricks to avoid ever actually
having to do a multiply or divide, because that is typically an
expensive operation (esp. divide).  Multiplying or dividing by 1, 2, or
4 can be implemented simply as a shift left or shift right, and we keep
track of a shifter value (0, 1, or 2) indicating how much to shift.
Multiplying by 3 can be implemented by doubling and then adding the
original value.  Dividing by 3, alas, cannot be implemented in any
simple shift/subtract method, as far as I know; so we just do a table
lookup.  For simplicity, we use a table of size 128K, which indexes the
``divide-by-3'' values for the first 64K non-negative numbers.  (Note
that we can increase the size up to 384K, i.e. indexing the first 192K
non-negative numbers, while still using shorts in the array.)  This also
means that the size of the known region can be at most 64K for
width-three characters.
@end quotation

Unfortunately, it turned out that the implementation had serious
problems which had never been corrected.  In particular, the known
region had a large tendency to become zero-length and stay that way.

So I decided to port the algorithm from FSF 21.3, in @file{markers.c}.
This algorithm is fairly simple.  Instead of using markers I kept the
cache array of known positions from the previous implementation.
Basically, we keep a number of positions cached:

@itemize @bullet
@item
the actual end of the buffer
@item
the beginning and end of the accessible region
@item
the value of point
@item
the position of the gap
@item
the last value we computed
@item
a set of positions that are ``far away'' from previously computed
positions (5000 chars currently; #### perhaps should be smaller)
@end itemize

For each position, we @code{CONSIDER()} it.
This means:

@itemize @bullet
@item
If the position is what we're looking for, return it directly.
@item
Starting with the beginning and end of the buffer, we successively
compute the smallest enclosing range of known positions.  If at any
point we discover that this range has the same byte and char length
(i.e. is entirely single-byte), then our computation is trivial.
@item
If at any point we get a small enough range (50 chars currently), stop
considering further positions.
@end itemize

Otherwise, once we have an enclosing range, we see which side is closer,
and iterate until we find the desired value.  As an optimization, I
replaced the simple loop in FSF with the use of
@code{bytecount_to_charcount()}, @code{charcount_to_bytecount()},
@code{bytecount_to_charcount_down()}, or
@code{charcount_to_bytecount_down()}.  (The latter two I added for this
purpose.)  These scan 4 or 8 bytes at a time through purely single-byte
characters.

If the amount we had to scan was more than our ``far away'' distance
(5000 characters, see above), then cache the new position.

#### Things to do:

@itemize @bullet
@item
Look at the most recent GNU Emacs to see whether anything has changed.
@item
Think about whether it makes sense to try to implement some sort of
known region or list of ``known regions'', like we had before.  This
would be a region of entirely single-byte characters that we can check
very quickly.  (Previously I used a range of same-width characters of
any size; but this adds extra complexity and slows down the scanning,
and is probably not worth it.)  As part of the scanning process in
@code{bytecount_to_charcount()} et al, we skip over chunks of entirely
single-byte chars, so it should be easy to remember the last one.
Presumably what we should do is keep track of the largest known
surrounding entirely-single-byte region for each of the cache positions
as well as perhaps the last-cached position.  We want to be careful not
to get bitten by the previous problem of having the known region getting
reset too often.  If we implement this, we might well want to continue
scanning some distance past the desired position (maybe 300-1000 bytes)
if we are in a single-byte range so that we won't end up expanding the
known range one position at a time and entering the function each time.
@item
Think about whether it makes sense to keep the position cache sorted.
This would allow it to be larger and finer-grained in its positions.
Note that with FSF's use of markers, they were sorted, but this was not
really made good use of.  With an array, we can do binary searching to
quickly find the smallest range.  We would probably want to make use of
the gap-array code in @file{extents.c}.
@end itemize

Note that FSF's algorithm checked @strong{ALL} markers, not just the
ones cached by this algorithm.  This includes markers created by the
user as well as both ends of any overlays.  We could do similarly, and
our extents could keep both byte and character positions rather than
just the former.  (But this would probably be overkill.  We should just
use our cache instead.  Any place an extent was set was surely already
visited by the char<-->byte conversion routines.)

@node Searching and Matching, , Byte-Char Position Conversion, Text
@section Searching and Matching
@cindex searching
@cindex matching

Very incomplete, limited to a brief introduction.

People find the searching and matching code difficult to understand.
And indeed, the details are hard.  However, the basic structures are not
so complex.
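For orientation, it may help to keep in mind the naive baseline that all
of the cleverness improves upon (illustrative C only, not code from
@file{search.c}):

@example
/* Naive string search: try a match at every position.
   Worst case O(N*M) comparisons for text length N, pattern length M. */
#include <stddef.h>
#include <string.h>

static const char *
naive_search (const char *text, const char *pat)
@{
  size_t n = strlen (text);
  size_t m = strlen (pat);
  size_t i;

  for (i = 0; i + m <= n; i++)
    if (memcmp (text + i, pat, m) == 0)
      return text + i;          /* leftmost match */
  return NULL;                  /* no match */
@}
@end example

Boyer-Moore, described below, precomputes jump tables from the pattern
so that a mismatch can skip many positions at once instead of advancing
one position at a time.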
First, there's a hard question with a simple answer.  What about Mule?
The answer here is that it turns out that Mule characters can be matched
byte by byte, so neither the search code nor the regular expression code
need take much notice of it at all!  Of course, we add some special
features (such as regular expressions that match only certain charsets),
but these do not require new concepts.  The main exception is that
wild-card matches in Mule have to be careful to swallow whole
characters.  This is handled using the same basic macros that are used
for buffer and string movements.  This will also be true if a UTF-8
representation is used for the internal encoding.

Perhaps surprisingly, the complex search algorithms are the ones used
for simple string searches.  In particular, the algorithm used for fast
string searching is Boyer-Moore.  This algorithm is based on the idea
that if you have a mismatch at a given position, you can precompute
where to restart the search.  This typically means that you can often
make many fewer than N character comparisons, where N is the position at
which the match is found, or the size of the text if it contains no
match.  That's fast!  But it's not easy.  You must ``compile'' the
search string into a jump table.  See the source, @file{search.c}, for
more information.  Emacs changes the basic algorithms somewhat in order
to handle case-insensitive searches without a full-blown regular
expression.

Regular expressions, on the other hand, have a trivial search
implementation: try a match at each position.  (Under POSIX rules, it's
a bit more complex, because POSIX requires that you find the
@emph{longest} match in the text.  This means you keep a record of the
best match so far, and find all the matches.)

The matching code for regular expressions is quite complex.  First, the
regular expression itself is compiled.  There are two basic approaches
that could be taken.  The first is to compile the expression into tables
to drive a generic finite automaton emulator.  This is the approach
given in many textbooks (Sedgewick's @emph{Algorithms} and Aho, Sethi,
and Ullman's @emph{Compilers: Principles, Techniques, and Tools}, aka
``The Dragon Book'') as well as being used by the @file{lex} family of
lexical analysis engines.

Emacs uses a somewhat different technique.  The expression is compiled
into a form of bytecode, which is interpreted by a special interpreter.
The interpreter itself basically amounts to an inline implementation of
the finite automaton emulator.  The advantage of this technique is that
it's easier to add special features, such as control of
case-sensitivity via a global variable.

The compiler is not treated here.  See the source, @file{regex.c}.  The
interpreter, although it is divided into several functions, and looks
fearsomely complex, is actually quite simple in concept.  Basically,
what you're doing there is a @code{strcmp} on steroids, right?

@example
int strcmp (const char *p,  /* pattern pointer */
            const char *b)  /* buffer pointer */
@{
  while (*p && *p == *b)
    p++, b++;
  return *p - *b;
@}
@end example

Really, it's no harder than that.  (A bit of a white lie, OK?)

How does the regexp code generalize this?

@enumerate
@item
Depending on the pattern, @code{*b} may have a general relationship to
@code{*p}.  @emph{I.e.}, direct comparison against @code{*p} is
generalized to include checks for set membership and context-dependent
properties.  This depends on @code{&*b}.  Of course that's meaningless
in C, so we use @code{b} directly, instead.
@item
Although @code{b} must advance step by step to ensure the algorithm
terminates, @code{p} can branch and jump.
@item
The information returned is much greater, including information about
subexpressions.
@end enumerate

We'll ignore (3).  (2) is mostly interesting when compiling the regular
expression.  Now we have

@example
@group
enum operator_t @{
  accept = 0,
  exact,
  any,
  range,
  group,     /* actually, these are probably */
  repeat,    /* turned into conditional code */
  /* etc */
@};
@end group

@group
enum status_t @{
  working = 0,
  matched,
  mismatch,
  end_of_buffer,
  error
@};
@end group

@group
struct pattern @{
  enum operator_t operator;
  char char_value;
  boolean range_table[256];
  /* etc, etc */
@};
@end group

@group
enum status_t
match (struct pattern *p,   /* pattern pointer */
       char *b)             /* buffer pointer */
@{
  enum status_t done = working;

  while (!(done = match_1_operator (p, b)))
    @{
      struct pattern *p1 = p;
      p = next_p (p, b);
      b = next_b (p1, b);
    @}
  return done;
@}
@end group
@end example

This format exposes the underlying finite automaton.  The @code{next_p}
and @code{next_b} functions have the same structure, except that they
decide where to jump (for @samp{p}) and whether or not to increment (for
@samp{b}), rather than checking for satisfaction of a matching
condition.

@example
enum status_t
match_1_operator (struct pattern *p, char *b)
@{
  if (! *b) return end_of_buffer;
  switch (p->operator)
    @{
    case accept:
      return matched;
    case exact:
      if (*b != p->char_value) return mismatch; else break;
    case any:
      break;
    case range:
      /* range_table is computed in the regexp_compile function */
      if (! p->range_table[*b]) return mismatch;
      /* etc, etc */
    @}
  return working;
@}
@end example

Grouping, repetition, and alternation are handled by compiling the
subexpression and calling @code{match (p->subpattern, b)} recursively.

In terms of reading the actual code, there are five optimizations
(obfuscations, if you like) that have been done.

@enumerate
@item
An explicit ``failure stack'' has been substituted for recursion.
@item
The @code{match_1_operator}, @code{next_p}, and @code{next_b} functions
are actually inlined into the @code{match} function for efficiency.
Then the pointer movement is interspersed with the matching operations.
@item
If the operator uses buffer context, the buffer pointer movement is
sometimes implicit in the operations retrieving the context.
@item
Some cases are combined: a short preparation for an individual case
``falls through'' into combined code for several cases.
@item
The @code{pattern} type is not an explicit @samp{struct}.  Instead, the
data (including, @emph{e.g.}, @samp{range_table}) is inlined into the
compiled bytecode.  This leads to bizarre code in the interpreter like

@example
case range:
  p += *(p + 1);
  break;
@end example

in @code{next_p}, because the compiled pattern is laid out

@example
..., 'range', count, first_8_flags, second_8_flags, ..., next_op, ...
@end example
@end enumerate

But if you keep your eye on the ``switch in a loop'' structure, you
should be able to understand the parts you need.

@node Multilingual Support, Consoles; Devices; Frames; Windows, Text, Top
@chapter Multilingual Support
@cindex Mule character sets and encodings
@cindex character sets and encodings, Mule
@cindex encodings, Mule character sets and

@emph{NOTE}: There is a great deal of overlapping and redundant
information in this chapter.  Ben wrote introductions to Mule issues a
number of times, each time not realizing that he had already written
another introduction previously.
Hopefully, in time these will all be integrated.

@emph{NOTE}: The information at the top of the source file @file{text.c}
is more complete than the following, and there is also a list of all
other places to look for text/I18N-related info.  Also look in
@file{text.h} for info about the DFC and Eistring API's.

Recall that there are two primary ways that text is represented in
XEmacs.  The @dfn{buffer} representation sees the text as a series of
bytes (Ibytes), with a variable number of bytes used per character.  The
@dfn{character} representation sees the text as a series of integers
(Ichars), one per character.  The character representation is a cleaner
representation from a theoretical standpoint, and is thus used in many
cases when lots of manipulations on a string need to be done.  However,
the buffer representation is the standard representation used in both
Lisp strings and buffers, and because of this, it is the ``default''
representation that text comes in.  The reason for using this
representation is that it's compact and compatible with ASCII.

@menu
* Introduction to Multilingual Issues #1::
* Introduction to Multilingual Issues #2::
* Introduction to Multilingual Issues #3::
* Introduction to Multilingual Issues #4::
* Character Sets::
* Encodings::
* Internal Mule Encodings::
* Byte/Character Types; Buffer Positions; Other Typedefs::
* Internal Text API's::
* Coding for Mule::
* CCL::
* Microsoft Windows-Related Multilingual Issues::
* Modules for Internationalization::
@end menu

@node Introduction to Multilingual Issues #1, Introduction to Multilingual Issues #2, Multilingual Support, Multilingual Support
@section Introduction to Multilingual Issues #1
@cindex introduction to multilingual issues #1

There is an introduction to these issues in the Lisp Reference manual.
@xref{Internationalization Terminology,,, lispref, XEmacs Lisp Reference
Manual}.  Among other documentation that may be of interest to internals
programmers is ISO-2022 (@pxref{ISO 2022,,, lispref, XEmacs Lisp
Reference Manual}) and CCL (@pxref{CCL,,, lispref, XEmacs Lisp Reference
Manual}).

@node Introduction to Multilingual Issues #2, Introduction to Multilingual Issues #3, Introduction to Multilingual Issues #1, Multilingual Support
@section Introduction to Multilingual Issues #2
@cindex introduction to multilingual issues #2

@subheading Introduction

This document covers a number of design issues, problems and proposals
with regard to XEmacs MULE.  First we present some definitions and some
aspects of the design that have been agreed upon.  Then we present some
issues and problems that need to be addressed, and then I include a
proposal of mine to address some of these issues.  When there are other
proposals, for example from Olivier, these will be appended to the end
of this document.

@subheading Definitions and Design Basics

First, @dfn{text} is defined to be a series of characters which together
define an utterance or partial utterance in some language.  Generally,
this language is a human language, but it may also be a computer
language if the computer language uses a representation close enough to
that of human languages for it to also make sense to call its
representation text.  Text is opposed to @dfn{binary}, which is a
sequence of bytes representing machine-readable but not human-readable
data.  A @dfn{byte} is merely a number within a predefined range, which
nowadays is nearly always zero to 255.

A @dfn{character} is a unit of text.  What makes one character different
from another is not always clear-cut.
It is generally related to the appearance of the character: perhaps not
any possible appearance of that character, but some sort of ideal
appearance that is assigned to the character.  Whether two characters
that look very similar are actually the same depends on various factors,
some of them political, such as whether the characters are used to mean
similar sorts of things or behave similarly in similar contexts.  In any
case, it is not always clearly defined whether two characters are
actually the same or not.  In practice, however, this is more or less
agreed upon.

A @dfn{character set} is just that, a set of one or more characters.
There will not be more than one instance of the same character in a
character set, and the set is logically unordered, although an order is
often imposed or suggested for the characters in it.  We can also define
an @dfn{order} on a character set, which is a way of assigning a unique
number (or possibly a pair of numbers, a triplet of numbers, or even a
set of four or more numbers) to each character in the character set.  A
character set combined with an order results in an @dfn{ordered
character set}.

In an ordered character set, there is an upper limit and a lower limit
on the possible values that a character, or that any number within the
set of numbers assigned to a character, can take.  However, the lower
limit does not have to start at zero or one, or anywhere else in
particular, nor does the upper limit have to end anywhere in particular,
and there may be gaps within these ranges such that particular numbers
or sets of numbers do not have a corresponding character, even though
they are within the upper and lower limits.

For example, @dfn{ASCII} defines a very standard ordered character set.
It is normally defined to be 94 characters in the range 33 through 126
inclusive on both ends, with every possible character within this range
being actually present in the character set.

Sometimes the ASCII character set is extended to include what are called
@dfn{non-printing characters}.  Non-printing characters are characters
which, instead of being displayed in a more or less rectangular block
like all other characters, indicate certain functions: typically control
of the display upon which the characters are being displayed, some
effect on a communications channel that may be currently open and
transmitting characters, a change in the meaning of future characters as
they are being decoded, or some other similar function.  You might say
that non-printing characters are somewhat of a hack, because they are a
special exception to the standard concept of a character as being a
printed glyph that has some direct correspondence in the non-computer
world.

With non-printing characters in mind, the 94-character ordered character
set called ASCII is often extended into a 96-character ordered character
set, also often called ASCII, which includes, in addition to the 94
characters already mentioned, two non-printing characters: one called
@dfn{space} and assigned the number 32, just below the bottom of the
previous range, and another called @dfn{delete} or @dfn{rubout}, which
is given the number 127, just above the end of the previous range.
Thus, to reiterate, the result is a 96-character ordered character set,
whose characters take the values from 32 to 127 inclusive.
Sometimes ASCII is further extended to contain 32 more non-printing
characters, which are given the numbers zero through 31, so that the
result is a 128-character ordered character set with characters numbered
zero through 127, and with many non-printing characters.  Another way to
look at this, and the way that is normally taken by XEmacs MULE, is that
the characters in the range zero through 31 in the most extended
definition of ASCII instead form their own ordered character set, which
is called @dfn{control zero}, and consists of 32 characters in the range
zero through 31.  A similar ordered character set called @dfn{control
one} is also created, and it contains 32 more non-printing characters in
the range 128 through 159.  Note that none of these three ordered
character sets overlaps in any of the numbers assigned to its
characters, so they can all be used at once.

Note further that the same character can occur in more than one
character set.  This was shown above, for example, in the two different
ordered character sets we defined, one of which we could have called
@dfn{ASCII} and the other @dfn{ASCII-extended}, to show that the latter
had been extended by two non-printing characters.  Most of the
characters in these two character sets are shared and present in both of
them.

Note that there is no restriction on the size of the character set, or
on the numbers that are assigned to characters in an ordered character
set.

It is often extremely useful to represent a sequence of characters as a
sequence of bytes, where a byte as defined above is a number in the
range zero to 255.  An @dfn{encoding} does precisely this.  It is simply
a mapping from a sequence of characters, possibly augmented with
information indicating the character set that each of these characters
belongs to, to a sequence of bytes which represents that sequence of
characters and no other; which is to say, the mapping is reversible.

A @dfn{coding system} is a set of rules for encoding a sequence of
characters augmented with character set information into a sequence of
bytes, and for later performing the reverse operation.  It is frequently
possible to group coding systems into classes or types based on common
features.  Typically, for example, a particular coding system class may
contain a base coding system which specifies some of the rules, but
leaves the rest unspecified.  Individual members of the coding system
class are formed by starting with the base coding system and augmenting
it with additional rules to produce a particular coding system; you
might think of these as variations on a theme.

@subheading XEmacs Specific Definitions

First of all, in XEmacs, the concept of character is a little different
from the general definition given above.  For one thing, the character
set that a character belongs to may or may not be an inherent part of
the character itself.  In other words, the same character occurring in
two different character sets may appear in XEmacs as two different
characters.  This is generally the case now, but we are attempting to
move in the other direction.  Different proposals may have different
ideas about exactly the extent to which this change will be carried out.
The general trend, though, is to represent all information about a
character other than the character itself using text properties attached
to the character.
That way two instances of the same character will look the same to Lisp
code that merely retrieves the character and does not also look at the
text properties of that character.  Everyone involved is in agreement on
doing it this way with all Latin characters, and in fact for all
characters other than Chinese, Japanese, and Korean ideographs.  For
those, there may be a difference of opinion.

A second difference between the general definition of character and the
XEmacs usage of character is that each character is assigned a unique
number that distinguishes it from all other characters in the world, or
at the very least, from all other characters currently existing anywhere
inside the current XEmacs invocation.  (If there is a case where the
weaker statement applies, but not the stronger statement, it would
possibly be with composite characters and any other such characters that
are created on the sly.)

This unique number is called the @dfn{character representation} of the
character, and its particular details are a matter of debate.  There is
a current standard in use, but it is undoubtedly going to change.  What
has definitely been agreed upon is that it will be an integer, more
specifically a positive integer, represented with less than or equal to
31 bits on a 32-bit architecture, and possibly up to 63 bits on a 64-bit
architecture, with the proviso that any characters whose representation
would fit on a 64-bit architecture, but not on a 32-bit architecture,
would be used only for composite characters and others that would
satisfy the weak uniqueness property mentioned above, but not the strong
uniqueness property.

At this point, it is useful to talk about the different representations
that a sequence of characters can take.  The simplest representation is
simply as a sequence of characters, and this is called the @dfn{Lisp
representation} of text, because it is the representation that Lisp
programs see.  Other representations include the @dfn{external
representation}, which refers to any encoding of the sequence of
characters, using the definition of encoding mentioned above.
Typically, text in the external representation is used outside of
XEmacs, for example in files, e-mail messages, web sites, and the like.

Another representation for a sequence of characters is what I will call
the @dfn{byte representation}, and it represents the way that XEmacs
internally represents text in a buffer, or in a string.  Potentially,
the representation could be different between a buffer and a string, and
then the terms @dfn{buffer byte representation} and @dfn{string byte
representation} would be used, but in practice I don't think this will
occur.  It will be possible, of course, for buffers and strings, or
particular buffers and particular strings, to contain different
sub-representations of a single representation.  For example, Olivier's
1-2-4 proposal allows for three sub-representations of his internal byte
representation, with 1-byte, 2-byte, and 4-byte wide characters
respectively.  A particular string may be in one sub-representation, and
a particular buffer in another sub-representation, but overall both are
following the same byte representation.  I do not use the term
@dfn{internal representation} here, as many people have, because it is
potentially ambiguous.

Another representation is called the @dfn{array of characters
representation}.
This is a representation on the C level in which the sequence of text is represented, not using the byte representation, but by using an array of characters, each represented using the character representation.  This sort of representation is often used by redisplay because it is more convenient to work with than any of the other internal representations.

The term @dfn{binary representation} may also be heard.  Binary representation is used to represent binary data.  When binary data is represented in the Lisp representation, an equivalence is simply set up between bytes zero through 255 and characters zero through 255.  These characters come from four character sets, which are, from bottom to top: control zero, ASCII, control one, and Latin-1.  Together, they comprise 256 characters, and are a good mapping for the 256 possible bytes in a binary representation.  Binary representation could also be used to refer to an external representation of the binary data, which is a simple direct byte-to-byte representation.  No internal representation should ever be referred to as a binary representation, because of the ambiguity.

The terms character set and coding system were defined generally, above.  In XEmacs, the equivalent concepts exist, although character set has been shortened to charset, and in fact represents specifically an ordered character set.  For each possible charset, and for each possible coding system, there is an associated object in XEmacs.  These objects will be of type charset and coding system, respectively.  Charsets and coding systems are divided into classes, or @dfn{types} (the normal term under XEmacs), and all charsets and coding systems that may be defined must belong to one of these types.  If you need to create a charset or coding system that is not one of these types, you will have to modify the C code to support this new type.  Some of the existing or soon-to-be-created types are, or will be, generic enough so that this shouldn't be an issue.

Note also that the byte representation of text and the character representation of a character are closely related.  You might say that ideally each is the simplest equivalent of the other given the general constraints on each representation.  To be specific, in the current MULE representation,

@enumerate
@item
Characters encode both the character itself and the character set that it comes from.  These character sets are always assumed to be representable as an ordered character set of size 96 or of size 96 by 96, or the trivially-related sizes 94 and 94 by 94.  The only allowable exceptions are the control zero and control one character sets, which are of size 32.  Character sets which do not naturally have a compatible ordering such as this are shoehorned into an ordered character set, or possibly two ordered character sets, of a compatible size.

@item
The variable width byte representation was deliberately chosen to allow scanning text forwards and backwards efficiently.  This necessitated dividing the possible bytes into three ranges, which we shall call A, B, and C.  Range A is used exclusively for single-byte characters, which is to say characters that are represented using only a single byte.  Multi-byte characters are always represented by using one byte from Range B, followed by one or more bytes from Range C.  What this means is that bytes that begin a character are unequivocally distinguished from bytes that do not begin a character, and therefore there is never a problem scanning backwards and finding the beginning of a character.
Note that UTF-8 adopts a very similar approach, in that it uses separate ranges for the first byte of a multi-byte sequence and for the following bytes of such a sequence.

@item
Given the fact that all of the allowed ordered character sets were essentially 96 characters per dimension, it made perfect sense to make Range C comprise 96 bytes.  With a little more tweaking, the currently-standard MULE byte representation was drafted from this.

@item
The MULE byte representation defined four basic representations for characters, which would take up from one to four bytes, respectively.  The MULE character representation thus had the following constraints:

@enumerate
@item
Character numbers zero through 255 should represent the characters that binary values zero through 255 would be mapped onto.  (Note: this was not the case in Kenichi Handa's version of this representation, but I changed it.)

@item
The four sub-classes of representation in the MULE byte representation should correspond to four contiguous, non-overlapping ranges of characters.

@item
The algorithmic conversion between the single character represented in the byte representation and in the character representation should be as easy as possible.

@item
Given the previous constraints, the character representation should be as compact as possible, which is to say it should use the least number of bits possible.
@end enumerate
@end enumerate

So you see that the entire structure of the byte and character representations stemmed from a very small number of basic choices, which were

@enumerate
@item
the choice to encode character set information in a character,

@item
the choice to assume that all character sets would have an order imposed upon them, with 96 characters per one or two dimensions (this is less arbitrary than it seems -- it follows ISO-2022), and

@item
the choice to use a variable width byte representation.
@end enumerate

What this means is that you cannot really separate the byte representation, the character representation, and the assumptions made about characters and whether they represent character sets from each other.  All of these are closely intertwined, and for purposes of simplicity, they should be designed together.  If you change one representation without changing another, you are in essence creating a completely new design with its own attendant problems -- since your new design is likely to be quite complex and not very coherent with regard to the translation between the character and byte representations, you are likely to run into problems.

@node Introduction to Multilingual Issues #3, Introduction to Multilingual Issues #4, Introduction to Multilingual Issues #2, Multilingual Support
@section Introduction to Multilingual Issues #3
@cindex introduction to multilingual issues #3

In XEmacs, Mule is a code word for the support for the input, handling, and display of multi-lingual text.  This section provides an overview of how this support impacts the C and Lisp code in XEmacs.  It is important for anyone who works on the C or the Lisp code, especially on the C code, to be aware of these issues, even if they don't work directly on code that implements multi-lingual features, because there are various general procedures that need to be followed in order to write Mule-compliant code.  (The specifics of these procedures are documented elsewhere in this manual.)

There are four primary aspects of Mule support:

@enumerate
@item
Internal handling and representation of multi-lingual text.
@item
Conversion between the internal representation of text and the various external representations in which multi-lingual text is encoded, such as Unicode representations (including mostly-fixed-width encodings such as UCS-2/UTF-16 and UCS-4, and variable-width ASCII-conformant encodings such as UTF-7 and UTF-8); the various ISO2022 representations, which typically use escape sequences to switch between different character sets (such as Compound Text, used under X Windows; JIS, used specifically for encoding Japanese; and EUC, a non-modal encoding used for Japanese, Korean, and certain other languages); Microsoft's multi-byte encodings (such as Shift-JIS); various simple encodings for particular 8-bit character sets (such as Latin-1 and Latin-2, and encodings such as koi8 and Alternativny for Cyrillic); and others.  This conversion needs to happen both for text in files and for text sent to or retrieved from system API calls.  It even needs to happen for external binary data, because the internal representation does not represent binary data simply as a sequence of bytes as it is represented externally.

@item
Proper display of multi-lingual characters.

@item
Input of multi-lingual text using the keyboard.
@end enumerate

These four aspects are for the most part independent of each other.

@subheading Characters, Character Sets, and Encodings

A @dfn{character} (which is, BTW, a surprisingly complex concept) is, in a written representation of text, the most basic written unit that has a meaning of its own.  It's comparable to a phoneme when analyzing words in spoken speech (for example, the sound of @samp{t} in English, which in fact has different pronunciations in different words -- aspirated in @samp{time}, unaspirated in @samp{stop}, unreleased or even pronounced as a glottal stop in @samp{button}, etc. -- but logically is a single concept).  Like a phoneme, a character is an abstract concept defined by its @emph{meaning}.  The character @samp{lowercase f}, for example, can always be used to represent the first letter in the word @samp{fill}, regardless of whether it's drawn upright or italic, whether the @samp{fi} combination is drawn as a single ligature, whether there are serifs on the bottom of the vertical stroke, etc.  (These different appearances of a single character are often called @dfn{graphs} or @dfn{glyphs}.)  Our concern when representing text is on representing the abstract characters, and not on their exact appearance.

A @dfn{character set} (or @dfn{charset}), as we define it, is a set of characters, each with an associated number (or set of numbers -- see below), called a @dfn{code point}.  It's important to understand that a character is not defined by any number attached to it, but by its meaning.  For example, ASCII and EBCDIC are two charsets containing exactly the same characters (lowercase and uppercase letters, numbers 0 through 9, particular punctuation marks) but with different numberings.  The `comma' character in ASCII and EBCDIC, for instance, is the same character despite having a different numbering.  Conversely, when comparing ASCII and JIS-Roman, which look the same except that the latter has a yen sign substituted for the backslash, we would say that the backslash and yen sign are @strong{not} the same characters, despite having the same number (92) and despite the fact that all other characters are present in both charsets, with the same numbering.
ASCII and JIS-Roman, then, do @emph{not} have exactly the same characters in them (ASCII has a backslash character but no yen-sign character, and vice-versa for JIS-Roman), unlike ASCII and EBCDIC, even though the numberings in ASCII and JIS-Roman are closer.

It's also important to distinguish between charsets and encodings.  For a simple charset like ASCII, there is only one encoding normally used -- each character is represented by a single byte, with the same value as its code point.  For more complicated charsets, however, things are not so obvious.  Unicode version 2, for example, is a large charset with thousands of characters, each indexed by a 16-bit number, often represented in hex, e.g. 0x05D0 for the Hebrew letter "aleph".  One obvious encoding uses two bytes per character (actually two encodings, depending on which of the two possible byte orderings is chosen).  This encoding is convenient for internal processing of Unicode text; however, it's incompatible with ASCII, so a different encoding, e.g. UTF-8, is usually used for external text, for example files or e-mail.  UTF-8 represents Unicode characters with one to three bytes (often extended to six bytes to handle characters with up to 31-bit indices).  Unicode characters 00 to 7F (identical with ASCII) are directly represented with one byte, and other characters with two or more bytes, each in the range 80 to FF.  In general, a single encoding may be able to represent more than one charset.
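To make the charset vs. encoding distinction concrete, here is a sketch of the UTF-8 rule just described, restricted to Unicode version 2 code points (0 - 0xFFFF).  This is illustrative C, not code from XEmacs:

@example
/* Illustrative sketch: encode one Unicode 2 code point (0 - 0xFFFF)
   as UTF-8.  Returns the number of bytes written to out[]. */
static int
utf8_encode (unsigned int code, unsigned char out[3])
@{
  if (code < 0x80)           /* 00 - 7F: a single byte, same as ASCII */
    @{
      out[0] = code;
      return 1;
    @}
  else if (code < 0x800)     /* 0080 - 07FF: two bytes, each >= 0x80 */
    @{
      out[0] = 0xC0 | (code >> 6);
      out[1] = 0x80 | (code & 0x3F);
      return 2;
    @}
  else                       /* 0800 - FFFF: three bytes */
    @{
      out[0] = 0xE0 | (code >> 12);
      out[1] = 0x80 | ((code >> 6) & 0x3F);
      out[2] = 0x80 | (code & 0x3F);
      return 3;
    @}
@}
@end example

The aleph mentioned above, 0x05D0, thus encodes as the two bytes 0xD7 0x90, while the ASCII range passes through unchanged -- precisely the ASCII compatibility property mentioned above.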
@subheading Internal Representation of Text

In an ASCII or single-European-character-set world, life is very simple.  There are 256 characters, and each character is represented using the numbers 0 through 255, which fit into a single byte.  With a few exceptions (such as case-changing operations or syntax classes like 'whitespace'), "text" is simply an array of indices into a font.  You can get different languages simply by choosing fonts with different 8-bit character sets (ISO-8859-1, -2, special-symbol fonts, etc.), and everything will "just work" as long as anyone else receiving your text uses a compatible font.

In the multi-lingual world, however, it is much more complicated.  There are a great number of different characters, which are organized in a complex fashion into various character sets.  The representation to use is not obvious because there are issues of size versus speed to consider.  In fact, there are in general two kinds of representations to work with: one that represents a single character using an integer (possibly a byte), and the other representing a single character as a sequence of bytes.  The former representation is normally called fixed width, and the latter variable width.  Both representations represent exactly the same characters, and the conversion from one representation to the other is governed by a specific formula (rather than by table lookup), but it may not be simple.

Most C code need not, and in fact should not, know the specifics of exactly how the representations work.  In fact, the code must not make assumptions about the representations.  This means in particular that it must use the proper macros for retrieving the character at a particular memory location, for determining how many characters are present in a particular stretch of text, for incrementing a pointer to a particular character to point to the following character, and so on.  It must not assume that one character is stored using one byte, or even using any particular number of bytes.  It must not assume that the number of characters in a stretch of text bears any particular relation to the number of bytes in that stretch.  It must not assume that the character at a particular memory location can be retrieved simply by dereferencing the memory location, even if a character is known to be ASCII or is being compared with an ASCII character, etc.  Careful coding is required to be Mule clean.  The biggest work of adding Mule support, in fact, is converting all of the existing code to be Mule clean.
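As a concrete sketch, a Mule-clean loop over a stretch of internal text looks something like the following.  It is written in terms of the types and "itext"-style macros introduced later in this chapter; see @file{text.h} for the authoritative names and signatures.

@example
/* Sketch: count how many characters in a stretch of internal-format
   text are equal to a given character.  The loop advances by
   characters, never by raw bytes, so it makes no assumption about
   how many bytes any particular character occupies. */
static Charcount
count_matching_ichars (Ibyte *ptr, Bytecount len, Ichar what)
@{
  Ibyte *end = ptr + len;
  Charcount n = 0;
  while (ptr < end)
    @{
      if (itext_ichar (ptr) == what)  /* retrieve the character here */
        n++;
      INC_IBYTEPTR (ptr);             /* advance one whole character */
    @}
  return n;
@}
@end example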
Lisp code is mostly unaffected by these concerns.  Text in strings and buffers appears simply as a sequence of characters regardless of whether Mule support is present.  The biggest difference with older versions of Emacs, as well as current versions of GNU Emacs, is that integers and characters are no longer equivalent, but are separate Lisp Object types.

@subheading Conversion Between Internal and External Representations

All text needs to be converted to an external representation before being sent to a function or file, and all text retrieved from a function or file needs to be converted to the internal representation.  This conversion needs to happen as close to the source or destination of the text as possible.  No operations should ever be performed on text encoded in an external representation other than simple copying, because no assumptions can reliably be made about the format of this text.  You cannot assume, for example, that the end of text is terminated by a null byte.  (For example, if the text is Unicode, it will have many null bytes in it.)  You cannot find the next "slash" character by searching through the bytes until you find a byte that looks like a "slash" character, because it might actually be the second byte of a Kanji character.  Furthermore, all text in the internal representation must be converted, even if it is known to be completely ASCII, because the external representation may not be ASCII-compatible (for example, if it is Unicode).

The place where C code needs to be the most careful is when calling external API functions.  It is easy to forget that all text passed to or retrieved from these functions needs to be converted.  This includes text in structures passed to or retrieved from these functions and all text that is passed to a callback function that is called by the system.

Macros are provided to perform conversions to or from external text.  These macros are called TO_EXTERNAL_FORMAT and TO_INTERNAL_FORMAT, respectively.  These macros accept input in various forms, for example, Lisp strings, buffers, lstreams, and raw data, and can return data in multiple formats, including both @code{malloc()}ed and @code{alloca()}ed data.  The use of @code{alloca()}ed data here is particularly important because, in general, the returned data will not be used after making the API call, and as a result, using @code{alloca()}ed data provides a very cheap and easy-to-use method of allocation.

These macros take a coding system argument which indicates the nature of the external encoding.  A coding system is an object that encapsulates the structure of a particular external encoding and the methods required to convert to and from this encoding.  A facility exists to create coding system aliases, which in essence gives a single coding system two different names.  It is effectively used in XEmacs to provide a layer of abstraction on top of the actual coding systems.  For example, the coding system alias "file-name" points to whichever coding system is currently used for encoding and decoding file names as passed to or retrieved from system calls.  In general, the actual encoding will differ from system to system, and will also depend on the particular locale that the user is in.  The use of the file-name alias hides that implementation detail behind an abstract interface layer, which provides a unified set of coding systems that are consistent across all operating environments.

The choice of which coding system to use in a particular conversion macro requires some thought.  In general, you should choose a lower-level actual coding system when the very design of the APIs you are working with calls for that particular coding system.  In all other cases, you should find the least general abstract coding system (i.e. coding system alias) that applies to your specific situation.  Only use the most general coding systems, such as native, when there is simply nothing else that is more appropriate.  By doing things this way, you allow the user more control over how the encoding actually works, because the user is free to map the abstract coding system names onto different actual coding systems.

Some common coding systems are:

@table @code
@item ctext
Compound Text, the standard encoding under X Windows, used for clipboard data and possibly other data.  (ctext is a coding system of type ISO2022.)

@item mswindows-unicode
Used for representing text passed to MS Windows API calls with arguments that need to be in Unicode format.  (mswindows-unicode is a coding system of type UTF-16.)

@item mswindows-multi-byte
Used for representing text passed to MS Windows API calls with arguments that need to be in multi-byte format.  Note that there are very few, if any, examples of such calls.

@item mswindows-tstr
Used for representing text passed to any MS Windows API calls that declare their argument as LPTSTR or LPCTSTR.  This is the vast majority of system calls and automatically translates either to mswindows-unicode or mswindows-multi-byte, depending on the presence or absence of the UNICODE preprocessor constant.  (If we compile XEmacs with this preprocessor constant, then all API calls use Unicode for all text passed to or received from these API calls.)

@item terminal
Used for text sent to or read from a text terminal in the absence of a more specific coding system.  (Calls to window-system-specific APIs should use the appropriate window-system-specific coding system if it makes sense to do so.)

@item file-name
Used when specifying the names of files in the absence of a more specific encoding, such as mswindows-tstr.

@item native
The most general coding system for specifying text passed to system calls.  This generally translates to whatever coding system is specified by the current locale.  This should only be used when none of the coding systems mentioned above are appropriate.
@end table
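As an example of these conventions in action, here is a hypothetical sketch of passing a file name to a system call; the conversion macros and sink types used here are documented under the DFC API later in this chapter.

@example
/* Sketch: convert internal-format text to the file-name encoding at
   the last moment before the system call.  C_STRING_ALLOCA produces
   a null-terminated, alloca()ed result that is automatically freed
   when the function returns. */
static int
open_internal_filename (Ibyte *filename, int oflag)
@{
  Extbyte *extname;

  TO_EXTERNAL_FORMAT (C_STRING, filename,
                      C_STRING_ALLOCA, extname,
                      Qfile_name);
  return open (extname, oflag, 0);
@}
@end example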
@subheading Proper Display of Multilingual Text

There are two things required to get this working correctly.  One is selecting the correct font, and the other is encoding the text according to the encoding used for that specific font, or for the window-system-specific text display API.  Generally, each separate character set has a different font associated with it, which is specified by name, and each font has an associated encoding into which the characters must be translated (this is the case on X Windows, at least; on MS Windows there is a more general mechanism).  Both the specific font for a charset and the encoding of that font are system-dependent.  Currently there is a way of specifying these two properties under X Windows (using the registry and ccl properties of a character set) but not for other window systems.  A more general system needs to be implemented to allow these characteristics to be specified for all window systems.

Another issue is making sure that the necessary fonts for displaying various character sets are installed on the system.  Currently, XEmacs provides, on its web site, X Windows fonts for a number of different character sets that can be installed by users.  This isn't done yet for MS Windows, but it should be.

@subheading Inputting of Multilingual Text

This is a rather complicated issue because there are many paradigms defined for inputting multi-lingual text, some of which are specific to particular languages, and any particular language may have many different paradigms defined for inputting its text.  These paradigms are encoded in input methods, and there is a standard API for defining an input method in XEmacs called LEIM, or Library of Emacs Input Methods.  Some of these input methods are written entirely in Elisp, and thus are system-independent, while others require the aid either of an external process, or of C-level support that ties into a particular system-specific input method API, for example, XIM under X Windows, or the active keyboard layout and IME support under Windows.  Currently, there is no support for any system-specific input methods under Microsoft Windows, although this will change.

@node Introduction to Multilingual Issues #4, Character Sets, Introduction to Multilingual Issues #3, Multilingual Support
@section Introduction to Multilingual Issues #4
@cindex introduction to multilingual issues #4

The rest of the sections in this chapter consist of yet another introduction to multilingual issues, duplicating the information in the previous sections.

@node Character Sets, Encodings, Introduction to Multilingual Issues #4, Multilingual Support
@section Character Sets
@cindex character sets

A @dfn{character set} (or @dfn{charset}) is an ordered set of characters.  A particular character in a charset is indexed using one or more @dfn{position codes}, which are non-negative integers.  The number of position codes needed to identify a particular character in a charset is called the @dfn{dimension} of the charset.  In XEmacs/Mule, all charsets have dimension 1 or 2, and the size of all charsets (except for a few special cases) is either 94, 96, 94 by 94, or 96 by 96.  The range of position codes used to index characters from any of these types of character sets is as follows:

@example
Charset type    Position code 1    Position code 2
------------------------------------------------------------
94              33 - 126           N/A
96              32 - 127           N/A
94x94           33 - 126           33 - 126
96x96           32 - 127           32 - 127
@end example

Note that in the above cases position codes do not start at an expected value such as 0 or 1.  The reason for this will become clear later.  For example, Latin-1 is a 96-character charset, and JISX0208 (the Japanese national character set) is a 94x94-character charset.

[Note that, although the ranges above define the @emph{valid} position codes for a charset, some of the slots in a particular charset may in fact be empty.  This is the case for JISX0208, for example, where (e.g.) all the slots whose first position code is in the range 118 - 127 are empty.]
There are three charsets that do not follow the above rules.  All of them have one dimension, and have ranges of position codes as follows:

@example
Charset name    Position code 1
------------------------------------
ASCII           0 - 127
Control-1       0 - 31
Composite       0 - some large number
@end example

(The upper bound of the position code for composite characters has not yet been determined, but it will probably be at least 16,383.)

ASCII is the union of two subsidiary character sets: Printing-ASCII (the printing ASCII character set, consisting of position codes 33 - 126, as for a standard 94-character charset) and Control-ASCII (the non-printing characters that would appear in a binary file with codes 0 - 32 and 127).  Control-1 contains the non-printing characters that would appear in a binary file with codes 128 - 159.  Composite contains characters that are generated by overstriking one or more characters from other charsets.

Note that some characters in ASCII, and all characters in Control-1, are @dfn{control} (non-printing) characters.  These have no printed representation but instead control some other function of the printing (e.g. TAB, code 9, moves the current character position to the next tab stop).  All other characters in all charsets are @dfn{graphic} (printing) characters.

When a binary file is read in, the bytes in the file are assigned to character sets as follows:

@example
Bytes       Character set     Range
--------------------------------------------------
0 - 127     ASCII             0 - 127
128 - 159   Control-1         0 - 31
160 - 255   Latin-1           32 - 127
@end example

This is a bit ad-hoc but gets the job done.

@node Encodings, Internal Mule Encodings, Character Sets, Multilingual Support
@section Encodings
@cindex encodings, Mule
@cindex Mule encodings

An @dfn{encoding} is a way of numerically representing characters from one or more character sets.  If an encoding only encompasses one character set, then the position codes for the characters in that character set could be used directly.  This is not possible, however, if more than one character set is to be used in the encoding.

For example, the conversion detailed above between bytes in a binary file and characters is effectively an encoding that encompasses the three character sets ASCII, Control-1, and Latin-1 in a stream of 8-bit bytes.

Thus, an encoding can be viewed as a way of encoding characters from a specified group of character sets using a stream of bytes, each of which contains a fixed number of bits (but not necessarily 8, as in the common usage of ``byte'').

Here are descriptions of a couple of common encodings:

@menu
* Japanese EUC (Extended Unix Code)::
* JIS7::
@end menu

@node Japanese EUC (Extended Unix Code), JIS7, Encodings, Encodings
@subsection Japanese EUC (Extended Unix Code)
@cindex Japanese EUC (Extended Unix Code)
@cindex EUC (Extended Unix Code), Japanese
@cindex Extended Unix Code, Japanese EUC

This encompasses the character sets Printing-ASCII, Katakana-JISX0201 (half-width katakana, the right half of JISX0201), Japanese-JISX0208, and Japanese-JISX0212.  Note that Printing-ASCII and Katakana-JISX0201 are 94-character charsets, while Japanese-JISX0208 and Japanese-JISX0212 are 94x94-character charsets.

The encoding is as follows:

@example
Character set         Representation (PC=position-code)
-------------         --------------
Printing-ASCII        PC1
Katakana-JISX0201     0x8E | PC1 + 0x80
Japanese-JISX0208     PC1 + 0x80 | PC2 + 0x80
Japanese-JISX0212     0x8F | PC1 + 0x80 | PC2 + 0x80
@end example

Note that there are other versions of EUC for other Asian languages.
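To make the table concrete, here is a sketch (illustrative C, not code from XEmacs) of the encoding side for a single character, given its character set and position code(s):

@example
/* Sketch: emit the Japanese EUC byte sequence for one character,
   per the table above.  Returns the number of bytes written. */
enum jp_charset @{ PRINTING_ASCII, KATAKANA_JISX0201,
                  JAPANESE_JISX0208, JAPANESE_JISX0212 @};

static int
euc_jp_encode (enum jp_charset cs, int pc1, int pc2, unsigned char *out)
@{
  switch (cs)
    @{
    case PRINTING_ASCII:
      out[0] = pc1;                    /* PC1 */
      return 1;
    case KATAKANA_JISX0201:
      out[0] = 0x8E;                   /* single shift */
      out[1] = pc1 + 0x80;             /* PC1 + 0x80 */
      return 2;
    case JAPANESE_JISX0208:
      out[0] = pc1 + 0x80;             /* PC1 + 0x80 */
      out[1] = pc2 + 0x80;             /* PC2 + 0x80 */
      return 2;
    case JAPANESE_JISX0212:
      out[0] = 0x8F;                   /* single shift */
      out[1] = pc1 + 0x80;             /* PC1 + 0x80 */
      out[2] = pc2 + 0x80;             /* PC2 + 0x80 */
      return 3;
    @}
  return 0;                            /* not reached */
@}
@end example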
EUC in general is characterized by

@enumerate
@item
row-column encoding,

@item
big-endian (row-first) ordering, and

@item
ASCII compatibility in variable width forms.
@end enumerate

@node JIS7, , Japanese EUC (Extended Unix Code), Encodings
@subsection JIS7
@cindex JIS7

This encompasses the character sets Printing-ASCII, Latin-JISX0201 (the left half of JISX0201; this character set is very similar to Printing-ASCII and is a 94-character charset), Japanese-JISX0208, and Katakana-JISX0201.  It uses 7-bit bytes.

Unlike EUC, this is a @dfn{modal} encoding, which means that there are multiple states that the encoding can be in, which affect how the bytes are to be interpreted.  Special sequences of bytes (called @dfn{escape sequences}) are used to change states.

The encoding is as follows:

@example
Character set         Representation (PC=position-code)
-------------         --------------
Printing-ASCII        PC1
Latin-JISX0201        PC1
Katakana-JISX0201     PC1
Japanese-JISX0208     PC1 | PC2


Escape sequence   ASCII equivalent   Meaning
---------------   ----------------   -------
0x1B 0x28 0x4A    ESC ( J            invoke Latin-JISX0201
0x1B 0x28 0x49    ESC ( I            invoke Katakana-JISX0201
0x1B 0x24 0x42    ESC $ B            invoke Japanese-JISX0208
0x1B 0x28 0x42    ESC ( B            invoke Printing-ASCII
@end example

Initially, Printing-ASCII is invoked.

@node Internal Mule Encodings, Byte/Character Types; Buffer Positions; Other Typedefs, Encodings, Multilingual Support
@section Internal Mule Encodings
@cindex internal Mule encodings
@cindex Mule encodings, internal
@cindex encodings, internal Mule

In XEmacs/Mule, each character set is assigned a unique number, called a @dfn{leading byte}.  This is used in the encodings of a character.  Leading bytes are in the range 0x80 - 0xFF (except for ASCII, which has a leading byte of 0), although some leading bytes are reserved.

Charsets whose leading byte is in the range 0x80 - 0x9F are called @dfn{official} and are used for built-in charsets.  Other charsets are called @dfn{private} and have leading bytes in the range 0xA0 - 0xFF; these are user-defined charsets.

More specifically:

@example
Character set               Leading byte
-------------               ------------
ASCII                       0 (0x7F in arrays indexed by leading byte)
Composite                   0x8D
Dimension-1 Official        0x80 - 0x8C/0x8D (0x8E is free)
Control                     0x8F
Dimension-2 Official        0x90 - 0x99 (0x9A - 0x9D are free)
Dimension-1 Private Marker  0x9E
Dimension-2 Private Marker  0x9F
Dimension-1 Private         0xA0 - 0xEF
Dimension-2 Private         0xF0 - 0xFF
@end example

There are two internal encodings for characters in XEmacs/Mule.  One is called @dfn{string encoding} and is an 8-bit encoding that is used for representing characters in a buffer or string.  It uses 1 to 4 bytes per character.  The other is called @dfn{character encoding} and is a 19-bit encoding that is used for representing characters individually in a variable.

(In the following descriptions, we'll ignore composite characters for the moment.  We also give a general (structural) overview first, followed later by the exact details.)

@menu
* Internal String Encoding::
* Internal Character Encoding::
@end menu

@node Internal String Encoding, Internal Character Encoding, Internal Mule Encodings, Internal Mule Encodings
@subsection Internal String Encoding
@cindex internal string encoding
@cindex string encoding, internal
@cindex encoding, internal string

ASCII characters are encoded using their position code directly.  Other characters are encoded using their leading byte followed by their position code(s) with the high bit set.
Characters in private character sets have their leading byte prefixed with a @dfn{leading byte prefix}, which is either 0x9E or 0x9F.  (No character sets are ever assigned these leading bytes.)  Specifically:

@example
Character set           Encoding (PC=position-code, LB=leading-byte)
-------------           --------
ASCII                   PC1  |
Control-1               LB   |  PC1 + 0xA0 |
Dimension-1 official    LB   |  PC1 + 0x80 |
Dimension-1 private     0x9E |  LB         | PC1 + 0x80 |
Dimension-2 official    LB   |  PC1 + 0x80 | PC2 + 0x80 |
Dimension-2 private     0x9F |  LB         | PC1 + 0x80 | PC2 + 0x80 |
@end example

The basic characteristic of this encoding is that the first byte of every character is in the range 0x00 - 0x9F, and the second and following bytes of every character are in the range 0xA0 - 0xFF.  This means that it is impossible to get out of sync, or more specifically:

@enumerate
@item
Given any byte position, the beginning of the character it is within can be determined in constant time.

@item
Given any byte position at the beginning of a character, the beginning of the next character can be determined in constant time.

@item
Given any byte position at the beginning of a character, the beginning of the previous character can be determined in constant time.

@item
Textual searches can simply treat encoded strings as if they were encoded in a one-byte-per-character fashion rather than the actual multi-byte encoding.
@end enumerate

None of the standard non-modal encodings meet all of these conditions.  For example, EUC satisfies only (2) and (3), while Shift-JIS and Big5 (not yet described) satisfy only (2).  (All non-modal encodings must satisfy (2), in order to be unambiguous.)
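Property (1), for instance, falls directly out of the byte ranges: to find the beginning of the character containing an arbitrary byte position, it suffices to skip backwards over bytes in the trailing-byte range, as in the following sketch.  (The real macros in @file{text.h} do essentially this; the function here is illustrative.)

@example
/* Sketch: back up to the first byte of the character containing
   *ptr.  First bytes of characters are in the range 0x00 - 0x9F;
   second and following bytes are in the range 0xA0 - 0xFF, so the
   loop always stops at a character boundary.  Since a character
   occupies at most four bytes, this takes constant time. */
static Ibyte *
beginning_of_ichar (Ibyte *ptr)
@{
  while (*ptr >= 0xA0)   /* trailing byte? */
    ptr--;
  return ptr;
@}
@end example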
@node Internal Character Encoding, , Internal String Encoding, Internal Mule Encodings
@subsection Internal Character Encoding
@cindex internal character encoding
@cindex character encoding, internal
@cindex encoding, internal character

One 19-bit word represents a single character.  The word is separated into three fields:

@example
Bit number: 18 17 16 15 14 13 12 11 10 09 08 07 06 05 04 03 02 01 00
            <------------> <------------------> <------------------>
Field:            1                  2                    3
@end example

Note that fields 2 and 3 hold 7 bits each, while field 1 holds 5 bits.

@example
Character set           Field 1       Field 2       Field 3
-------------           -------       -------       -------
ASCII                      0             0            PC1
   range:                                           (00 - 7F)
Control-1                  0             1            PC1
   range:                                           (00 - 1F)
Dimension-1 official       0         LB - 0x7F        PC1
   range:                            (01 - 0D)      (20 - 7F)
Dimension-1 private        0         LB - 0x80        PC1
   range:                            (20 - 6F)      (20 - 7F)
Dimension-2 official   LB - 0x8F        PC1           PC2
   range:              (01 - 0A)     (20 - 7F)      (20 - 7F)
Dimension-2 private    LB - 0xE1        PC1           PC2
   range:              (0F - 1E)     (20 - 7F)      (20 - 7F)
Composite                0x1F            ?             ?
@end example

Note that character codes 0 - 255 are the same as the ``binary encoding'' described above.  Most of the code in XEmacs knows nothing of the representation of a character other than that values 0 - 255 represent ASCII, Control-1, and Latin-1.

@strong{WARNING WARNING WARNING}: The Boyer-Moore code in @file{search.c}, and the code in @code{search_buffer()} that determines whether that code can be used, knows that ``field 3'' in a character always corresponds to the last byte in the textual representation of the character.  (This is important because the Boyer-Moore algorithm works by looking at the last byte of the search string and &&#### finish this.

@node Byte/Character Types; Buffer Positions; Other Typedefs, Internal Text API's, Internal Mule Encodings, Multilingual Support
@section Byte/Character Types; Buffer Positions; Other Typedefs
@cindex byte/character types; buffer positions; other typedefs
@cindex byte/character types
@cindex character types
@cindex buffer positions
@cindex typedefs, other

@menu
* Byte Types::
* Different Ways of Seeing Internal Text::
* Buffer Positions::
* Other Typedefs::
* Usage of the Various Representations::
* Working With the Various Representations::
@end menu

@node Byte Types, Different Ways of Seeing Internal Text, Byte/Character Types; Buffer Positions; Other Typedefs, Byte/Character Types; Buffer Positions; Other Typedefs
@subsection Byte Types
@cindex byte types

Stuff pointed to by a char * or unsigned char * will nearly always be one of the following types:

@itemize @minus
@item
a) [Ibyte] pointer to internally-formatted text
@item
b) [Extbyte] pointer to text in some external format, which can be defined as all formats other than the internal one
@item
c) [Ascbyte] pure ASCII text
@item
d) [Binbyte] binary data that is not meant to be interpreted as text
@item
e) [Rawbyte] general data in memory, where we don't care about whether it's text or binary
@item
f) [Boolbyte] a zero or a one
@item
g) [Bitbyte] a byte used for bit fields
@item
h) [Chbyte] null-semantics @code{char *}; used when casting an argument to an external API where the other types may not be appropriate
@end itemize

Types (b), (c), (f) and (h) are defined as @code{char}, while the others are @code{unsigned char}.  This is for maximum safety (signed characters are dangerous to work with) while maintaining as much compatibility with external API's and string constants as possible.

We also provide versions of the above types defined with different underlying C types, for API compatibility.  These use the following prefixes:

@example
C = plain char, when the base type is unsigned
U = unsigned
S = signed
@end example

(Formerly I had a comment saying that type (e) "should be replaced with void *".  However, there are in fact many places where an unsigned char * might be used -- e.g. for ease in pointer computation, since void * doesn't allow this, and for compatibility with external API's.)

Note that these typedefs are purely for documentation purposes; from the C code's perspective, they are exactly equivalent to @code{char *}, @code{unsigned char *}, etc., so you can freely use them with library functions declared as such.

Using these more specific types rather than the general ones helps avoid the confusion that occurs when the semantics of a char * or unsigned char * argument being studied are unclear.  Furthermore, by requiring that ALL uses of @code{char} be replaced with some other type as part of the Mule-ization process, we can use a search for @code{char} as a way of finding code that has not been properly Mule-ized yet.

@node Different Ways of Seeing Internal Text, Buffer Positions, Byte Types, Byte/Character Types; Buffer Positions; Other Typedefs
@subsection Different Ways of Seeing Internal Text
@cindex different ways of seeing internal text

There are two primary ways of representing internal text: one is as an "array" of individual characters; the other is as a "stream" of bytes.  In the ASCII world, where there are at most 256 characters, things are easy because each character fits into a byte.  In general, however, this is not true -- see the above discussion of characters vs. encodings.
In some cases, it's also important to distinguish between a stream representation as a series of bytes and as a series of textual units.  This is particularly important w.r.t. Unicode.  The UTF-16 representation (sometimes referred to, rather sloppily, as simply the "Unicode" format) represents text as a series of 16-bit units.  Mostly, each unit corresponds to a single character, but not necessarily, as characters outside of the range 0-65535 (the BMP or "Basic Multilingual Plane" of Unicode) require two 16-bit units, through the mechanism of "surrogates".  When a series of 16-bit units is serialized into a byte stream, there are at least two possible representations, little-endian and big-endian, and which one is used may depend on the native format of 16-bit integers in the CPU of the machine that XEmacs is running on.  (Similarly, UTF-32 is logically a representation with 32-bit textual units.)

Specifically:

@itemize @minus
@item
UTF-8 has 1-byte (8-bit) units.
@item
UTF-16 has 2-byte (16-bit) units.
@item
UTF-32 has 4-byte (32-bit) units.
@item
XEmacs-internal encoding (the old "Mule" encoding) has 1-byte (8-bit) units.
@item
UTF-7 technically has 7-bit units that are within the "mail-safe" range (ASCII 32 - 126 plus a few control characters), but normally is encoded in an 8-bit stream.  (UTF-7 is also a modal encoding, since it has a normal mode where printable ASCII characters represent themselves and a shifted mode, introduced with a plus sign, where a base-64 encoding is used.)
@item
UTF-5 technically has 7-bit units (normally encoded in an 8-bit stream, like UTF-7), but only uses uppercase A-V and 0-9, and only encodes 4 bits worth of data per character.  UTF-5 is meant for encoding Unicode inside of DNS names.
@end itemize

Thus, we can imagine three levels in the representation of textual data:

@example
series of characters -> series of textual units -> series of bytes
       [Ichar]                  [Itext]                [Ibyte]
@end example

XEmacs has three corresponding typedefs:

@itemize @minus
@item
An Ichar is an integer (at least 32-bit), representing a 31-bit character.
@item
An Itext is an unsigned value, either 8, 16 or 32 bits, depending on the nature of the internal representation, and corresponding to a single textual unit.
@item
An Ibyte is an @code{unsigned char}, representing a single byte in a textual byte stream.
@end itemize

Internal text in stream format can be simultaneously viewed as either @code{Itext *} or @code{Ibyte *}.  The @code{Ibyte *} representation is convenient for copying data from one place to another, because such routines usually expect byte counts.  However, @code{Itext *} is much better for actually working with the data.

From a text-unit perspective, units 0 through 127 will always be ASCII-compatible, and data in Lisp strings (and other textual data generated as a whole, e.g. from external conversion) will be followed by a null-unit terminator.  From an @code{Ibyte *} perspective, however, the encoding is only ASCII-compatible if it uses 1-byte units.

Paralleling the three text representations, three integral count types exist: Charcount, Textcount and Bytecount.

NOTE: Despite the presence of the terminator, internal text itself can have nulls in it!  (Null text units, not just the null bytes present in any UTF-16 encoding.)  The terminator is present because in many cases internal text is passed to routines that will ultimately pass the text to library functions that cannot handle embedded nulls, e.g.
functions manipulating filenames, and it is a real hassle to have to pass the length around constantly.  But this can lead to sloppy coding!  We need to be careful about watching for nulls in places that are important, e.g. manipulating string objects or passing data to/from the clipboard.

@table @code
@item Ibyte
The data in a buffer or string is logically made up of Ibyte objects, where an Ibyte takes up the same amount of space as a char.  (It is declared differently, though, to catch invalid usages.)  Strings stored using Ibytes are said to be in "internal format".  The important characteristics of internal format are

@itemize @minus
@item
ASCII characters are represented as a single Ibyte, in the range 0 - 0x7f.
@item
All other characters are represented as an Ibyte in the range 0x80 - 0x9f followed by one or more Ibytes in the range 0xa0 to 0xff.
@end itemize

This leads to a number of desirable properties:

@itemize @minus
@item
Given the position of the beginning of a character, you can find the beginning of the next or previous character in constant time.
@item
When searching for a substring or an ASCII character within the string, you need merely use standard searching routines.
@end itemize

@item Itext
#### Document me.

@item Ichar
This typedef represents a single Emacs character, which can be ASCII, ISO-8859, or some extended character, as would typically be used for Kanji.  Note that the representation of a character as an Ichar is @strong{not} the same as the representation of that same character in a string; thus, you cannot do the standard C trick of passing a pointer to a character to a function that expects a string.

An Ichar takes up 19 bits of representation and (for code compatibility and such) is compatible with an int.  This representation is visible on the Lisp level.  The important characteristics of the Ichar representation are

@itemize @minus
@item
values 0x00 - 0x7f represent ASCII.
@item
values 0x80 - 0xff represent the right half of ISO-8859-1.
@item
values 0x100 and up represent all other characters.
@end itemize

This means that Ichar values are upwardly compatible with the standard 8-bit representation of ASCII/ISO-8859-1.

@item Extbyte
Strings that go in or out of Emacs are in "external format", typedef'ed as an array of char or a char *.  There is more than one external format (JIS, EUC, etc.) but they all have similar properties.  They are modal encodings, which is to say that the meaning of particular bytes is not fixed but depends on what "mode" the string is currently in (e.g. bytes in the range 0 - 0x7f might be interpreted as ASCII, or as Hiragana, or as 2-byte Kanji, depending on the current mode).  The mode starts out in ASCII/ISO-8859-1 and is switched using escape sequences -- for example, in the JIS encoding, 'ESC $ B' switches to a mode where pairs of bytes in the range 0 - 0x7f are interpreted as Kanji characters.

External-formatted data is generally desirable for passing data between programs because it is upwardly compatible with standard ASCII/ISO-8859-1 strings and may require less space than internal encodings such as the one described above.  In addition, some encodings (e.g. JIS) keep all characters (except the ESC used to switch modes) in the printing ASCII range 0x20 - 0x7e, which results in a much higher probability that the data will avoid being garbled in transmission.  Externally-formatted data is generally not very convenient to work with, however, and for this reason is usually converted to internal format before any work is done on the string.
NOTE: filenames need to be in external format so that ISO-8859-1 characters come out correctly.
@end table

@node Buffer Positions, Other Typedefs, Different Ways of Seeing Internal Text, Byte/Character Types; Buffer Positions; Other Typedefs
@subsection Buffer Positions
@cindex buffer positions

There are three possible ways to specify positions in a buffer.  All of these are one-based: the beginning of the buffer is position or index 1, and 0 is not a valid position.

As a "buffer position" (typedef Charbpos): This is an index specifying an offset in characters from the beginning of the buffer.  Note that buffer positions are logically @strong{between} characters, not on a character.  The difference between two buffer positions specifies the number of characters between those positions.  Buffer positions are the only kind of position externally visible to the user.

As a "byte index" (typedef Bytebpos): This is an index over the bytes used to represent the characters in the buffer.  If there is no Mule support, this is identical to a buffer position, because each character is represented using one byte.  However, with Mule support, many characters require two or more bytes for their representation, and so a byte index may be greater than the corresponding buffer position.

As a "memory index" (typedef Membpos): This is the byte index adjusted for the gap.  For positions before the gap, this is identical to the byte index.  For positions after the gap, this is the byte index plus the gap size.  There are two possible memory indices for the gap position; the memory index at the beginning of the gap should always be used, except in code that deals with manipulating the gap, where both indices may be seen.

The address of the character "at" (i.e. following) a particular position can be obtained from the formula

@example
buffer_start_address + memory_index (position) - 1
@end example

except in the case of characters at the gap position.

@node Other Typedefs, Usage of the Various Representations, Buffer Positions, Byte/Character Types; Buffer Positions; Other Typedefs
@subsection Other Typedefs
@cindex other typedefs

@table @code
@item Charcount
This typedef represents a count of characters, such as a character offset into a string or the number of characters between two positions in a buffer.  The difference between two Charbpos's is a Charcount, and character positions in a string are represented using a Charcount.

@item Textcount
#### Document me.

@item Bytecount
Similar to a Charcount but represents a count of bytes.  The difference between two Bytebpos's is a Bytecount.
@end table

@node Usage of the Various Representations, Working With the Various Representations, Other Typedefs, Byte/Character Types; Buffer Positions; Other Typedefs
@subsection Usage of the Various Representations
@cindex usage of the various representations

Memory indices are used in low-level functions in @file{insdel.c} and for extent endpoints and marker positions.  The reason for this is that this way, the extents and markers don't need to be updated for most insertions, which merely shrink the gap and don't move any characters around in memory.  (The beginning-of-gap memory index simplifies insertions w.r.t. markers, because text usually gets inserted after markers.  For extents, it is merely for consistency, because text can get inserted either before or after an extent's endpoint depending on the open/closedness of the endpoint.)

Byte indices are used in other code that needs to be fast, such as the searching, redisplay, and extent-manipulation code.

Buffer positions are used in all other code.  This is because this representation is easiest to work with (especially since Lisp code always uses buffer positions), necessitates the fewest changes to existing code, and is the safest (e.g. if the text gets shifted underneath a buffer position, it will still point to a character; if text is shifted under a byte index, it might point to the middle of a character, which would be bad).

Similarly, Charcounts are used in all code that deals with strings, except for code that needs to be fast, which uses Bytecounts.
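To illustrate the resulting division of labor, here is a sketch of byte-level access starting from an externally-visible position.  The function and macro names follow the conventions of @file{buffer.h} but are illustrative; check that header for the actual current interface.

@example
/* Sketch: fetch the character following buffer position pos.  The
   character-based position is converted to a byte index once, and
   the byte-level macro then does the fast work. */
static Ichar
character_after (struct buffer *buf, Charbpos pos)
@{
  Bytebpos bytepos = charbpos_to_bytebpos (buf, pos);
  return BYTE_BUF_FETCH_CHAR (buf, bytepos);
@}
@end example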
Strings are always passed around internally using internal format.  Conversions to and from external format are performed at the time that the data goes in or out of Emacs.

@node Working With the Various Representations, , Usage of the Various Representations, Byte/Character Types; Buffer Positions; Other Typedefs
@subsection Working With the Various Representations
@cindex working with the various representations

We write things this way because it's very important that @code{MAX_BYTEBPOS_GAP_SIZE_3} is a multiple of 3.  (As it happens, 65535 is a multiple of 3, but this may not always be the case.  #### unfinished

@node Internal Text API's, Coding for Mule, Byte/Character Types; Buffer Positions; Other Typedefs, Multilingual Support
@section Internal Text API's
@cindex internal text API's
@cindex text API's, internal
@cindex API's, text, internal

@strong{NOTE}: The most current documentation for these API's is in @file{text.h}.  In case of error, assume that file is correct and this one wrong.

@menu
* Basic internal-format API's::
* The DFC API::
* The Eistring API::
@end menu

@node Basic internal-format API's, The DFC API, Internal Text API's, Internal Text API's
@subsection Basic internal-format API's
@cindex basic internal-format API's
@cindex internal-format API's, basic
@cindex API's, basic internal-format

These are simple functions and macros to convert between text representation and characters, move forward and back in text, etc.  #### Finish the rest of this.

Use the following functions/macros on contiguous text in any of the internal formats.  Those that take a format arg work on all internal formats; the others work only on the default (variable-width under Mule) format.  If the text you're operating on is known to come from a buffer, use the buffer-level functions in @file{buffer.h}, which automatically know the correct format and handle the gap.

Some terminology: "itext" appearing in the macros means "internal-format text" -- type @code{Ibyte *}.  Operations on such pointers themselves, rather than on the text being pointed to, likewise have "itext" in the macro name.  "ichar" in the macro names means an Ichar -- the representation of a character as a single integer rather than a series of bytes, as part of "itext".  Many of the macros below are for converting between the two representations of characters.

Note also that we try to consistently distinguish between an "Ichar" and a Lisp character.  Stuff working with Lisp characters often just says "char", so we consistently use "Ichar" when that's what we're working with.
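For example, converting a character between its Ichar form and its itext (byte sequence) form might look like the following sketch.  The macro names follow @file{text.h}; treat the exact signatures as illustrative, and note that the C library @code{assert()} is used here purely for exposition.

@example
/* Sketch: round-trip a character between its two representations.
   MAX_ICHAR_LEN is the largest number of bytes a single character
   can occupy in the internal format. */
static void
ichar_roundtrip (Ichar ch)
@{
  Ibyte buf[MAX_ICHAR_LEN];
  Bytecount len = set_itext_ichar (buf, ch);   /* character -> bytes */

  assert (itext_ichar (buf) == ch);            /* bytes -> character */
  assert (len == itext_ichar_len (buf));       /* the lengths agree */
@}
@end example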
@node The DFC API, The Eistring API, Basic internal-format API's, Internal Text API's
@subsection The DFC API
@cindex DFC API
@cindex API, DFC

This is for conversion between internal and external text.  Note that there is also the "new DFC" API, which @strong{returns} a pointer to the converted text (in alloca space), rather than storing it into a variable.

The macros below are used for converting data between different formats.  Generally, the data is textual, and the formats are related to internationalization (e.g. converting between internal-format text and UTF-8) -- but the mechanism is general, and could be used for anything, e.g. decoding gzipped data.

In general, conversion involves a source of data, a sink, the existing format of the source data, and the desired format of the sink.  The macros below, however, always require that either the source or sink is internal-format text.  Therefore, in practice the conversions below involve a source, a sink, an external format (specified by a coding system), and the direction of conversion (internal->external or vice-versa).

Sources and sinks can be raw data (sized or unsized -- when unsized, input data is assumed to be null-terminated [double null-terminated for Unicode-format data], and on output the length is not stored anywhere), Lisp strings, Lisp buffers, lstreams, and opaque data objects.  When the output is raw data, the result can be allocated either with @code{alloca()} or @code{malloc()}.  (There is currently no provision for writing into a fixed buffer.  If you want this, use @code{alloca()} output and then copy the data -- but be careful with the size!  Unless you are very sure of the encoding being used, upper bounds for the size are not in general computable.)  The obvious restrictions on source and sink types apply (e.g. Lisp strings are a source and sink only for internal data).

All raw data output will contain an extra null byte (two bytes for Unicode -- currently, in fact, all output data, whether internal or external, is double-null-terminated, but you can't count on this; see below).  This means that enough space is allocated to contain the extra nulls; however, these nulls are not reflected in the returned output size.

The most basic macros are TO_EXTERNAL_FORMAT and TO_INTERNAL_FORMAT.  These can be used to convert between any kinds of sources or sinks.  However, 99% of conversions involve raw data or Lisp strings as both source and sink, and usually data is output as @code{alloca()} rather than @code{malloc()}.  For this reason, convenience macros are defined for many types of conversions involving raw data and/or Lisp strings, especially when the output is an @code{alloca()}ed string.  (When the destination is a Lisp_String, there are other functions that should be used instead -- @code{build_ext_string()} and @code{make_ext_string()}, for example.)  The convenience macros are of two types -- the older kind that store the result into a specified variable, and the newer kind that return the result.  The newer kind of macros don't exist when the output is sized data, because that would have two return values.

NOTE: All convenience macros are ultimately defined in terms of TO_EXTERNAL_FORMAT and TO_INTERNAL_FORMAT.  Thus, any comments below about the workings of these macros also apply to all convenience macros.

@example
TO_EXTERNAL_FORMAT (source_type, source, sink_type, sink, codesys)
TO_INTERNAL_FORMAT (source_type, source, sink_type, sink, codesys)
@end example

Typical use is

@example
TO_EXTERNAL_FORMAT (LISP_STRING, str, C_STRING_MALLOC, ptr, Qfile_name);
@end example

which means that the contents of the lisp string @var{str} are written to a malloc'ed memory area which will be pointed to by @var{ptr}, after the function returns.
The conversion will be done using the @code{file-name} coding system (which will be controlled by the user indirectly by setting or binding the variable @code{file-name-coding-system}).

Some sources and sinks require two C variables to specify.  We use some preprocessor magic to allow different source and sink types, and even different numbers of arguments to specify different types of sources and sinks.  So we can have a call that looks like

@example
TO_INTERNAL_FORMAT (DATA, (ptr, len),
                    MALLOC, (ptr, len),
                    coding_system);
@end example

The parenthesized argument pairs are required to make the preprocessor magic work.

NOTE: GC is inhibited during the entire operation of these macros.  This is because frequently the data to be converted comes from strings but gets passed in as just DATA, and GC may move around the string data.  If we didn't inhibit GC, there'd have to be a lot of messy recoding, alloca-copying of strings and other annoying stuff.

The source or sink can be specified in one of these ways:

@example
DATA,   (ptr, len),    // input data is a fixed buffer of size len
ALLOCA, (ptr, len),    // output data is in an @code{ALLOCA()}ed buffer of size len
MALLOC, (ptr, len),    // output data is in a @code{malloc()}ed buffer of size len
C_STRING_ALLOCA, ptr,  // equivalent to ALLOCA (ptr, len_ignored) on output
C_STRING_MALLOC, ptr,  // equivalent to MALLOC (ptr, len_ignored) on output
C_STRING, ptr,         // equivalent to DATA, (ptr, strlen/wcslen (ptr))
                       //   on input (the Unicode version is used when correct)
LISP_STRING, string,   // input or output is a Lisp_Object of type string
LISP_BUFFER, buffer,   // output is written to (point) in lisp buffer
LISP_LSTREAM, lstream, // input or output is a Lisp_Object of type lstream
LISP_OPAQUE, object,   // input or output is a Lisp_Object of type opaque
@end example

When specifying the sink, use lvalues, since the macro will assign to them, except when the sink is an lstream or a lisp buffer.

For the sink types @code{ALLOCA} and @code{C_STRING_ALLOCA}, the resulting text is stored in a stack-allocated buffer, which is automatically freed on returning from the function.  However, the sink types @code{MALLOC} and @code{C_STRING_MALLOC} return @code{xmalloc()}ed memory.  The caller is responsible for freeing this memory using @code{xfree()}.

The macros accept the kinds of sources and sinks appropriate for internal and external data representation.  See the type_checking_assert macros below for the actual allowed types.

Since some sources and sinks use one argument (a Lisp_Object) to specify them, while others take a (pointer, length) pair, we use some C preprocessor trickery to allow pair arguments to be specified by parenthesizing them, as in the examples above.

Anything prefixed by dfc_ (`data format conversion') is private; such names are only used to implement these macros.

[[Using C_STRING* is appropriate for use with external APIs that take null-terminated strings.  For internal data, we should try to be '\0'-clean - i.e. allow arbitrary data to contain embedded '\0'.  Sometime in the future we might allow output to C_STRING_ALLOCA or C_STRING_MALLOC _only_ with @code{TO_EXTERNAL_FORMAT()}, not @code{TO_INTERNAL_FORMAT()}.]]

The above comments are not true.  Frequently (most of the time, in fact), external strings come as zero-terminated entities, where the zero-termination is the only way to find out the length.
There is no problem using the same lvalue for source and sink.  Also, when pointers are required, the code (currently at least) is lax and allows any pointer types, either in the source or the sink.  This makes it possible, e.g., to deal with internal-format data held in @code{char *}'s or external-format data held in @code{WCHAR *} (i.e. Unicode).

Finally, whenever storage allocation is called for, extra space is allocated for a terminating zero, and such a zero is stored in the appropriate place, regardless of whether the source data was specified using a length or was specified as zero-terminated.  This allows you to freely pass the resulting data, no matter how obtained, to a routine that expects zero termination (modulo, of course, that any embedded zeros in the resulting text will cause truncation).  In fact, currently two embedded zeros are allocated and stored after the data result.  This is to allow for the possibility of storing a Unicode value on output, which needs the two zeros.  Currently, however, the two zeros are stored regardless of whether the conversion is internal or external and regardless of whether the external coding system is in fact Unicode.  This behavior may change in the future, and you cannot rely on this -- the most you can rely on is that sink data in Unicode format will have two terminating nulls, which combine to form one Unicode null character.

NOTE: You might ask, why are these not written as functions that @strong{RETURN} the converted string, since that would allow them to be used much more conveniently, without having to constantly declare temporary variables?  The answer is that in fact I originally did write the routines that way, but that required either

@itemize @bullet
@item
(a) calling @code{alloca()} inside of a function call, or
@item
(b) using expressions separated by commas and a global temporary variable, or
@item
(c) using the GCC extension (@{ ... @}).
@end itemize

It turned out that all of the above had bugs, all caused by GCC (hence the comments about "those GCC wankers" and "ream gcc up the ass").  As for (a), some versions of GCC (especially on Intel platforms) had buggy implementations of @code{alloca()} that couldn't handle being called inside of a function call -- they just decremented the stack right in the middle of pushing args.  Oops, crash with stack trashing, very bad.  (b) was an attempt to fix (a), and that led to further GCC crashes, especially when you had two such calls in a single subexpression, because GCC couldn't be counted upon to follow even a minimally reasonable order of execution.  True, you can't count on one argument being evaluated before another, but GCC would actually interleave them so that the temp var got stomped on by one while the other was accessing it.  So I tried (c), which was problematic because that GCC extension has more bugs in it than a termite's nest.  So reluctantly I converted to the current way.  Now, that was a while ago (c. 1994), and it appears that the bug involving @code{alloca()} in function calls has long since been fixed.
More recently, I defined the new-dfc routines below, which DO allow exactly this convenience of returning your args rather than storing them in temp variables, and I also wrote a configure check to see whether @code{alloca()} causes crashes inside of function calls; if so, the portable @code{alloca()} implementation in @file{alloca.c} is used.  If you define TEST_NEW_DFC, the old routines get written in terms of the new ones, and a beta was put out with this turned on; it appears to cause no problems -- so we should consider switching, and feel no compunctions about writing further such function-like @code{alloca()} routines in lieu of statement-like ones. --ben

@node The Eistring API, , The DFC API, Internal Text API's
@subsection The Eistring API
@cindex Eistring API
@cindex API, Eistring

(This API is currently under-used)

When doing simple things with internal text, the basic internal-format API's are enough.  But to do things like delete or replace a substring, concatenate various strings, etc. is difficult to do cleanly because of the allocation issues.  The Eistring API is designed to deal with this, and provides a clean way of modifying and building up internal text.  (Note that the former lack of this API has meant that some code uses Lisp strings to do similar manipulations, resulting in excess garbage and increased garbage collection.)

NOTE: The Eistring API is (or should be) Mule-correct even without an ASCII-compatible internal representation.

@example
#### NOTE: This is a work in progress.  Neither the API nor especially
the implementation is finished.

NOTE: An Eistring is a structure that makes it easy to work with
internally-formatted strings of data.  It provides operations similar
in feel to the standard @code{strcpy()}, @code{strcat()}, @code{strlen()}, etc., but

(a) it is Mule-correct
(b) it does dynamic allocation so you never have to worry about size
    restrictions
(c) it comes in an @code{ALLOCA()} variety (all allocation is stack-local,
    so there is no need to explicitly clean up) as well as a
    @code{malloc()} variety
(d) it knows its own length, so it does not suffer from standard null
    byte brain-damage -- but it null-terminates the data anyway, so it
    can be passed to standard routines
(e) it provides a much more powerful set of operations and knows about
    all the standard places where string data might reside:
    Lisp_Objects, other Eistrings, Ibyte * data with or without an
    explicit length, ASCII strings, Ichars, etc.
(f) it provides easy operations to convert to/from externally-formatted
    data, and is easier to use than the standard TO_INTERNAL_FORMAT
    and TO_EXTERNAL_FORMAT macros.  (An Eistring can store both the
    internal and external version of its data, but the external
    version is only initialized or changed when you call
    @code{eito_external()}.)

The idea is to make it as easy to write Mule-correct string
manipulation code as it is to write normal string manipulation code.
We also make the API sufficiently general that it can handle multiple
internal data formats (e.g. some fixed-width optimizing formats and a
default variable-width format) and allows for @strong{ANY} data format
we might choose in the future for the default format, including UCS2.
(In other words, we can't assume that the internal format is
ASCII-compatible and we can't assume it doesn't have embedded null
bytes.  We do assume, however, that any chosen format will have the
concept of null-termination.)  All of this is hidden from the user.

#### It is really too bad that we don't have a real object-oriented
language, or at least a language with polymorphism!
**********************************************
*               Declaration                  *
**********************************************

To declare an Eistring, put one of the following in the local variable
section of your function:

DECLARE_EISTRING (name);
     Declare a new Eistring and initialize it to the empty string.
     This is a standard local variable declaration and can go anywhere
     in the variable declaration section.  NAME itself is declared as
     an Eistring *, and its storage declared on the stack.

DECLARE_EISTRING_MALLOC (name);
     Declare and initialize a new Eistring, which uses @code{malloc()}ed
     instead of @code{ALLOCA()}ed data.  This is a standard local
     variable declaration and can go anywhere in the variable
     declaration section.  Once you initialize the Eistring, you will
     have to free it using @code{eifree()} to avoid memory leaks.  You
     will need to use this form if you are passing an Eistring to any
     function that modifies it (otherwise, the modified data may be in
     stack space and get overwritten when the function returns).

Alternatively, use

Eistring ei;
void eiinit (Eistring *ei);
void eiinit_malloc (Eistring *einame);
     If you need to put an Eistring elsewhere than in a local variable
     declaration (e.g. in a structure), declare it as shown and then
     call one of the init macros.

Also note:

void eifree (Eistring *ei);
     If you declared an Eistring to use @code{malloc()} to hold its
     data, or converted it to the heap using @code{eito_malloc()}, then
     this releases any data in it and afterwards resets the Eistring
     using @code{eiinit_malloc()}.  Otherwise, it just resets the
     Eistring using @code{eiinit()}.

**********************************************
*               Conventions                  *
**********************************************

- The names of the functions have been chosen, where possible, to
  match the names of @code{str*()} functions in the standard C API.

**********************************************
*             Initialization                 *
**********************************************

void eireset (Eistring *eistr);
     Initialize the Eistring to the empty string.

void eicpy_* (Eistring *eistr, ...);
     Initialize the Eistring from somewhere:

void eicpy_ei (Eistring *eistr, Eistring *eistr2);
     ... from another Eistring.
void eicpy_lstr (Eistring *eistr, Lisp_Object lisp_string);
     ... from a Lisp_Object string.
void eicpy_ch (Eistring *eistr, Ichar ch);
     ... from an Ichar (this can be a conventional C character).

void eicpy_lstr_off (Eistring *eistr, Lisp_Object lisp_string,
                     Bytecount off, Charcount charoff,
                     Bytecount len, Charcount charlen);
     ... from a section of a Lisp_Object string.
void eicpy_lbuf (Eistring *eistr, Lisp_Object lisp_buf,
                 Bytecount off, Charcount charoff,
                 Bytecount len, Charcount charlen);
     ... from a section of a Lisp_Object buffer.
void eicpy_raw (Eistring *eistr, const Ibyte *data, Bytecount len);
     ... from raw internal-format data in the default internal format.
void eicpy_rawz (Eistring *eistr, const Ibyte *data);
     ... from raw internal-format data in the default internal format
     that is "null-terminated" (the meaning of this depends on the
     nature of the default internal format).
void eicpy_raw_fmt (Eistring *eistr, const Ibyte *data, Bytecount len,
                    Internal_Format intfmt, Lisp_Object object);
     ... from raw internal-format data in the specified format.
void eicpy_rawz_fmt (Eistring *eistr, const Ibyte *data,
                     Internal_Format intfmt, Lisp_Object object);
     ... from raw internal-format data in the specified format that is
     "null-terminated" (the meaning of this depends on the nature of
     the specific format).
void eicpy_c (Eistring *eistr, const Ascbyte *c_string);
     ... from an ASCII null-terminated string.  Non-ASCII characters
     in the string are @strong{ILLEGAL} (read @code{abort()} with
     error-checking defined).
void eicpy_c_len (Eistring *eistr, const Ascbyte *c_string,
                  Bytecount len);
     ... from an ASCII string, with length specified.  Non-ASCII
     characters in the string are @strong{ILLEGAL} (read @code{abort()}
     with error-checking defined).
void eicpy_ext (Eistring *eistr, const Extbyte *extdata,
                Lisp_Object codesys);
     ... from external null-terminated data, with coding system
     specified.
void eicpy_ext_len (Eistring *eistr, const Extbyte *extdata,
                    Bytecount extlen, Lisp_Object codesys);
     ... from external data, with length and coding system specified.
void eicpy_lstream (Eistring *eistr, Lisp_Object lstream);
     ... from an lstream; reads data till eof.  Data must be in the
     default internal format; otherwise, interpose a decoding lstream.

**********************************************
*    Getting the data out of the Eistring    *
**********************************************

Ibyte *eidata (Eistring *eistr);
     Return a pointer to the raw data in an Eistring.  This is NOT a
     copy.

Lisp_Object eimake_string (Eistring *eistr);
     Make a Lisp string out of the Eistring.

Lisp_Object eimake_string_off (Eistring *eistr,
                               Bytecount off, Charcount charoff,
                               Bytecount len, Charcount charlen);
     Make a Lisp string out of a section of the Eistring.

void eicpyout_alloca (Eistring *eistr, LVALUE: Ibyte *ptr_out,
                      LVALUE: Bytecount len_out);
     Make an @code{ALLOCA()} copy of the data in the Eistring, using
     the default internal format.  Due to the nature of
     @code{ALLOCA()}, this must be a macro, with all lvalues passed in
     as parameters.  (More specifically, not all compilers correctly
     handle using @code{ALLOCA()} as the argument to a function call --
     GCC on x86 didn't used to, for example.)  A pointer to the
     @code{ALLOCA()}ed data is stored in PTR_OUT, and the length of the
     data (not including the terminating zero) is stored in LEN_OUT.

void eicpyout_alloca_fmt (Eistring *eistr, LVALUE: Ibyte *ptr_out,
                          LVALUE: Bytecount len_out,
                          Internal_Format intfmt, Lisp_Object object);
     Like @code{eicpyout_alloca()}, but converts to the specified
     internal format.  (No formats other than FORMAT_DEFAULT are
     currently implemented, and you get an assertion failure if you
     try.)

Ibyte *eicpyout_malloc (Eistring *eistr, Bytecount *intlen_out);
     Make a @code{malloc()} copy of the data in the Eistring, using
     the default internal format.  This is a real function.  No
     lvalues passed in.  Returns the new data, and stores the length
     (not including the terminating zero) using INTLEN_OUT, unless
     it's a NULL pointer.

Ibyte *eicpyout_malloc_fmt (Eistring *eistr, Internal_Format intfmt,
                            Bytecount *intlen_out, Lisp_Object object);
     Like @code{eicpyout_malloc()}, but converts to the specified
     internal format.  (No formats other than FORMAT_DEFAULT are
     currently implemented, and you get an assertion failure if you
     try.)

**********************************************
*            Moving to the heap              *
**********************************************

void eito_malloc (Eistring *eistr);
     Move this Eistring to the heap.  Its data will be stored in a
     @code{malloc()}ed block rather than the stack.
     Subsequent changes to this Eistring will @code{realloc()} the
     block as necessary.  Use this when you want the Eistring to
     remain in scope past the end of this function call.  You will
     have to manually free the data in the Eistring using
     @code{eifree()}.

void eito_alloca (Eistring *eistr);
     Move this Eistring back to the stack, if it was moved to the
     heap with @code{eito_malloc()}.  This will automatically free
     any heap-allocated data.

**********************************************
*           Retrieving the length            *
**********************************************

Bytecount eilen (Eistring *eistr);
     Return the length of the internal data, in bytes.  See also
     @code{eiextlen()}, below.
Charcount eicharlen (Eistring *eistr);
     Return the length of the internal data, in characters.

**********************************************
*           Working with positions           *
**********************************************

Bytecount eicharpos_to_bytepos (Eistring *eistr, Charcount charpos);
     Convert a char offset to a byte offset.
Charcount eibytepos_to_charpos (Eistring *eistr, Bytecount bytepos);
     Convert a byte offset to a char offset.

Bytecount eiincpos (Eistring *eistr, Bytecount bytepos);
     Increment the given position by one character.
Bytecount eiincpos_n (Eistring *eistr, Bytecount bytepos, Charcount n);
     Increment the given position by N characters.
Bytecount eidecpos (Eistring *eistr, Bytecount bytepos);
     Decrement the given position by one character.
Bytecount eidecpos_n (Eistring *eistr, Bytecount bytepos, Charcount n);
     Decrement the given position by N characters.

**********************************************
*    Getting the character at a position     *
**********************************************

Ichar eigetch (Eistring *eistr, Bytecount bytepos);
     Return the character at a particular byte offset.
Ichar eigetch_char (Eistring *eistr, Charcount charpos);
     Return the character at a particular character offset.

**********************************************
*    Setting the character at a position     *
**********************************************

Ichar eisetch (Eistring *eistr, Bytecount bytepos, Ichar chr);
     Set the character at a particular byte offset.
Ichar eisetch_char (Eistring *eistr, Charcount charpos, Ichar chr);
     Set the character at a particular character offset.

**********************************************
*              Concatenation                 *
**********************************************

void eicat_* (Eistring *eistr, ...);
     Concatenate onto the end of the Eistring, with data coming from
     the same places as above:

void eicat_ei (Eistring *eistr, Eistring *eistr2);
     ... from another Eistring.
void eicat_c (Eistring *eistr, Ascbyte *c_string);
     ... from an ASCII null-terminated string.  Non-ASCII characters
     in the string are @strong{ILLEGAL} (read @code{abort()} with
     error-checking defined).
void eicat_raw (ei, const Ibyte *data, Bytecount len);
     ... from raw internal-format data in the default internal format.
void eicat_rawz (ei, const Ibyte *data);
     ... from raw internal-format data in the default internal format
     that is "null-terminated" (the meaning of this depends on the
     nature of the default internal format).
void eicat_lstr (ei, Lisp_Object lisp_string);
     ... from a Lisp_Object string.
void eicat_ch (ei, Ichar ch);
     ... from an Ichar.

(All except the first variety are convenience functions.  In the
general case, create another Eistring from the source.)
**********************************************
*               Replacement                  *
**********************************************

void eisub_* (Eistring *eistr, Bytecount off, Charcount charoff,
              Bytecount len, Charcount charlen, ...);
     Replace a section of the Eistring, specifically:

void eisub_ei (Eistring *eistr, Bytecount off, Charcount charoff,
               Bytecount len, Charcount charlen, Eistring *eistr2);
     ... with another Eistring.
void eisub_c (Eistring *eistr, Bytecount off, Charcount charoff,
              Bytecount len, Charcount charlen, Ascbyte *c_string);
     ... with an ASCII null-terminated string.  Non-ASCII characters
     in the string are @strong{ILLEGAL} (read @code{abort()} with
     error-checking defined).
void eisub_ch (Eistring *eistr, Bytecount off, Charcount charoff,
               Bytecount len, Charcount charlen, Ichar ch);
     ... with an Ichar.

void eidel (Eistring *eistr, Bytecount off, Charcount charoff,
            Bytecount len, Charcount charlen);
     Delete a section of the Eistring.

**********************************************
*      Converting to an external format      *
**********************************************

void eito_external (Eistring *eistr, Lisp_Object codesys);
     Convert the Eistring to an external format and store the result
     in the Eistring.  NOTE: Further changes to the Eistring will
     @strong{NOT} change the external data stored in the Eistring.
     You will have to call @code{eito_external()} again in such a case
     if you want the external data.

Extbyte *eiextdata (Eistring *eistr);
     Return a pointer to the external data stored in the Eistring as
     a result of a prior call to @code{eito_external()}.

Bytecount eiextlen (Eistring *eistr);
     Return the length in bytes of the external data stored in the
     Eistring as a result of a prior call to @code{eito_external()}.

**********************************************
* Searching in the Eistring for a character  *
**********************************************

Bytecount eichr (Eistring *eistr, Ichar chr);
Charcount eichr_char (Eistring *eistr, Ichar chr);
Bytecount eichr_off (Eistring *eistr, Ichar chr, Bytecount off,
                     Charcount charoff);
Charcount eichr_off_char (Eistring *eistr, Ichar chr, Bytecount off,
                          Charcount charoff);
Bytecount eirchr (Eistring *eistr, Ichar chr);
Charcount eirchr_char (Eistring *eistr, Ichar chr);
Bytecount eirchr_off (Eistring *eistr, Ichar chr, Bytecount off,
                      Charcount charoff);
Charcount eirchr_off_char (Eistring *eistr, Ichar chr, Bytecount off,
                           Charcount charoff);

**********************************************
*   Searching in the Eistring for a string   *
**********************************************

Bytecount eistr_ei (Eistring *eistr, Eistring *eistr2);
Charcount eistr_ei_char (Eistring *eistr, Eistring *eistr2);
Bytecount eistr_ei_off (Eistring *eistr, Eistring *eistr2,
                        Bytecount off, Charcount charoff);
Charcount eistr_ei_off_char (Eistring *eistr, Eistring *eistr2,
                             Bytecount off, Charcount charoff);
Bytecount eirstr_ei (Eistring *eistr, Eistring *eistr2);
Charcount eirstr_ei_char (Eistring *eistr, Eistring *eistr2);
Bytecount eirstr_ei_off (Eistring *eistr, Eistring *eistr2,
                         Bytecount off, Charcount charoff);
Charcount eirstr_ei_off_char (Eistring *eistr, Eistring *eistr2,
                              Bytecount off, Charcount charoff);

Bytecount eistr_c (Eistring *eistr, Ascbyte *c_string);
Charcount eistr_c_char (Eistring *eistr, Ascbyte *c_string);
Bytecount eistr_c_off (Eistring *eistr, Ascbyte *c_string,
                       Bytecount off, Charcount charoff);
Charcount eistr_c_off_char (Eistring *eistr, Ascbyte *c_string,
                            Bytecount off, Charcount charoff);
Bytecount eirstr_c (Eistring *eistr, Ascbyte *c_string);
Charcount eirstr_c_char (Eistring *eistr, Ascbyte *c_string);
Bytecount eirstr_c_off (Eistring *eistr, Ascbyte *c_string,
                        Bytecount off, Charcount charoff);
Charcount eirstr_c_off_char (Eistring *eistr, Ascbyte *c_string,
                             Bytecount off, Charcount charoff);

**********************************************
*               Comparison                   *
**********************************************

int eicmp_* (Eistring *eistr, ...);
int eicmp_off_* (Eistring *eistr, Bytecount off, Charcount charoff,
                 Bytecount len, Charcount charlen, ...);
int eicasecmp_* (Eistring *eistr, ...);
int eicasecmp_off_* (Eistring *eistr, Bytecount off, Charcount charoff,
                     Bytecount len, Charcount charlen, ...);
int eicasecmp_i18n_* (Eistring *eistr, ...);
int eicasecmp_i18n_off_* (Eistring *eistr, Bytecount off,
                          Charcount charoff, Bytecount len,
                          Charcount charlen, ...);

     Compare the Eistring with the other data.  The return value is
     the same as from @code{strcmp()}.  The @code{*} is either @code{ei}
     for another Eistring (in which case @code{...} is an Eistring), or
     @code{c} for a pure-ASCII string (in which case @code{...} is a
     pointer to that string).  For anything more complex, first create
     an Eistring out of the source.  Comparison is either simple
     (@code{eicmp_...}), ASCII case-folding (@code{eicasecmp_...}), or
     multilingual case-folding (@code{eicasecmp_i18n_...}).

More specifically, the prototypes are:

int eicmp_ei (Eistring *eistr, Eistring *eistr2);
int eicmp_off_ei (Eistring *eistr, Bytecount off, Charcount charoff,
                  Bytecount len, Charcount charlen, Eistring *eistr2);
int eicasecmp_ei (Eistring *eistr, Eistring *eistr2);
int eicasecmp_off_ei (Eistring *eistr, Bytecount off,
                      Charcount charoff, Bytecount len,
                      Charcount charlen, Eistring *eistr2);
int eicasecmp_i18n_ei (Eistring *eistr, Eistring *eistr2);
int eicasecmp_i18n_off_ei (Eistring *eistr, Bytecount off,
                           Charcount charoff, Bytecount len,
                           Charcount charlen, Eistring *eistr2);

int eicmp_c (Eistring *eistr, Ascbyte *c_string);
int eicmp_off_c (Eistring *eistr, Bytecount off, Charcount charoff,
                 Bytecount len, Charcount charlen, Ascbyte *c_string);
int eicasecmp_c (Eistring *eistr, Ascbyte *c_string);
int eicasecmp_off_c (Eistring *eistr, Bytecount off, Charcount charoff,
                     Bytecount len, Charcount charlen,
                     Ascbyte *c_string);
int eicasecmp_i18n_c (Eistring *eistr, Ascbyte *c_string);
int eicasecmp_i18n_off_c (Eistring *eistr, Bytecount off,
                          Charcount charoff, Bytecount len,
                          Charcount charlen, Ascbyte *c_string);

**********************************************
*         Case-changing the Eistring         *
**********************************************

void eilwr (Eistring *eistr);
     Convert all characters in the Eistring to lowercase.
void eiupr (Eistring *eistr);
     Convert all characters in the Eistring to uppercase.
@end example
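To give a feel for how these operations fit together, here is a minimal sketch of a function built on the Eistring API.  The function itself is invented for illustration; the @code{ei*} calls are the ones documented above:

@example
/* Sketch: return a new Lisp string consisting of STR1
   concatenated with STR2, with the result lowercased.  */
static Lisp_Object
sketch_concat_downcase (Lisp_Object str1, Lisp_Object str2)
@{
  DECLARE_EISTRING (ei);      /* stack-allocated, no cleanup needed */

  eicpy_lstr (ei, str1);      /* initialize from a Lisp string */
  eicat_lstr (ei, str2);      /* concatenate another Lisp string */
  eilwr (ei);                 /* lowercase the whole thing */
  return eimake_string (ei);  /* cons up the result */
@}
@end example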
@node Coding for Mule, CCL, Internal Text API's, Multilingual Support
@section Coding for Mule
@cindex coding for Mule
@cindex Mule, coding for

Although Mule support is not compiled by default in XEmacs, many people are using it, and we consider it crucial that new code works correctly with multibyte characters.  This is not hard; it is only a matter of following several simple user-interface guidelines.  Even if you never compile with Mule, with a little practice you will find it quite easy to code Mule-correctly.

Note that these guidelines are not necessarily tied to the current Mule implementation; they are also a good idea to follow on the grounds of code generalization for future I18N work.

@menu
* Character-Related Data Types::
* Working With Character and Byte Positions::
* Conversion to and from External Data::
* General Guidelines for Writing Mule-Aware Code::
* An Example of Mule-Aware Code::
* Mule-izing Code::
@end menu

@node Character-Related Data Types, Working With Character and Byte Positions, Coding for Mule, Coding for Mule
@subsection Character-Related Data Types
@cindex character-related data types
@cindex data types, character-related

First, let's review the basic character-related datatypes used by XEmacs.  Note that some of the separate @code{typedef}s are not mandatory, but they improve the clarity of the code a great deal, because one glance at the declaration can tell the intended use of the variable.

@table @code
@item Ichar
@cindex Ichar
An @code{Ichar} holds a single Emacs character.

Obviously, the equality between characters and bytes is lost in the Mule world.  Characters can be represented by one or more bytes in the buffer, and @code{Ichar} is a C type large enough to hold any character.  (This currently isn't quite true for ISO 10646, which defines a character as a 31-bit non-negative quantity, while XEmacs characters are only 30 bits.  This is irrelevant, unless you are considering using the ISO 10646 private groups to support really large private character sets---in particular, the Mule character set!---in a version of XEmacs using Unicode internally.)

Without Mule support, an @code{Ichar} is equivalent to an @code{unsigned char}.  [[This doesn't seem to be true; @file{lisp.h} unconditionally @samp{typedef}s @code{Ichar} to @code{int}.]]

@item Ibyte
@cindex Ibyte
The data representing the text in a buffer or string is logically a set of @code{Ibyte}s.

XEmacs does not work with the same character formats all the time; when reading characters from the outside, it decodes them to an internal format, and likewise encodes them when writing.  @code{Ibyte} (in fact @code{unsigned char}) is the basic unit of XEmacs' internal buffer and string format.  An @code{Ibyte *} is the type that points at text encoded in the variable-width internal encoding.

One character can correspond to one or more @code{Ibyte}s.  In the current Mule implementation, an ASCII character is represented by the same @code{Ibyte}, and other characters are represented by a sequence of two or more @code{Ibyte}s.  (This will also be true of an implementation using UTF-8 as the internal encoding.  In fact, only code that implements character code conversions and a very few macros used to implement motion by whole characters will notice the difference between UTF-8 and the Mule encoding.)

Without Mule support, there are exactly 256 characters, implicitly Latin-1; each character is represented using one @code{Ibyte}, and there is a one-to-one correspondence between @code{Ibyte}s and @code{Ichar}s.

@item Charxpos
@itemx Charbpos
@itemx Charcount
@cindex Charxpos
@cindex Charbpos
@cindex Charcount
A @code{Charbpos} represents a character position in a buffer.  A @code{Charcount} represents a number (count) of characters.  Logically, subtracting two @code{Charbpos} values yields a @code{Charcount} value.  When representing a character position in a string, we just use @code{Charcount} directly.  The reason for having a separate typedef for buffer positions is that they are 1-based, whereas string positions are 0-based, and hence string counts and positions can be freely intermixed (a string position is equivalent to the count of characters from the beginning).
When representing a character position that could be either in a buffer or string (for example, in the extent code), @code{Charxpos} is used.  Although all of these are @code{typedef}ed to @code{EMACS_INT}, we use them in preference to @code{EMACS_INT} to make it clear what sort of position is being used.

@code{Charxpos}, @code{Charbpos} and @code{Charcount} values are the only ones that are ever visible to Lisp.

@item Bytexpos
@itemx Bytebpos
@itemx Bytecount
@cindex Bytexpos
@cindex Bytebpos
@cindex Bytecount
A @code{Bytebpos} represents a byte position in a buffer.  A @code{Bytecount} represents the distance between two positions, in bytes.  Byte positions in strings use @code{Bytecount}, and for byte positions that can be either in a buffer or string, @code{Bytexpos} is used.  The relationship between @code{Bytexpos}, @code{Bytebpos} and @code{Bytecount} is the same as the relationship between @code{Charxpos}, @code{Charbpos} and @code{Charcount}.

@item Extbyte
@cindex Extbyte
When dealing with the outside world, XEmacs works with @code{Extbyte}s, which are equivalent to @code{char}.  The distance between two @code{Extbyte}s is a @code{Bytecount}, since external text is a byte-by-byte encoding.  Extbytes occur mainly at the transition point between internal text and external functions.  XEmacs code should not, if it can possibly avoid it, do any actual manipulation using external text, since its format is completely unpredictable (it might not even be ASCII-compatible).
@end table

@node Working With Character and Byte Positions, Conversion to and from External Data, Character-Related Data Types, Coding for Mule
@subsection Working With Character and Byte Positions
@cindex character and byte positions, working with
@cindex byte positions, working with character and
@cindex positions, working with character and byte

Now that we have defined the basic character-related types, we can look at the macros and functions designed for working with them and for conversion between them.  Most of these macros are defined in @file{buffer.h}, and we don't discuss all of them here, but only the most important ones.  Examining the existing code is the best way to learn about them.

@table @code
@item MAX_ICHAR_LEN
@cindex MAX_ICHAR_LEN
This preprocessor constant is the maximum number of buffer bytes needed to represent an Emacs character in the variable-width internal encoding.  It is useful when allocating temporary strings to hold a known number of characters.  For instance:

@example
@group
@{
  Charcount cclen;
  ...
  @{
    /* Allocate place for @var{cclen} characters. */
    Ibyte *buf = (Ibyte *) alloca (cclen * MAX_ICHAR_LEN);
...
@end group
@end example

If you followed the previous section, you can guess that, logically, multiplying a @code{Charcount} value by @code{MAX_ICHAR_LEN} produces a @code{Bytecount} value.

In the current Mule implementation, @code{MAX_ICHAR_LEN} equals 4.  Without Mule, it is 1.  In a mature Unicode-based XEmacs, it will also be 4 (since all Unicode characters can be encoded in UTF-8 in 4 bytes or less), but some versions may use up to 6, in order to use the large private space provided by ISO 10646 to ``mirror'' the Mule code space.

@item itext_ichar
@itemx set_itext_ichar
@cindex itext_ichar
@cindex set_itext_ichar
The @code{itext_ichar} macro takes an @code{Ibyte} pointer and returns the @code{Ichar} stored at that position.  If it were a function, its prototype would be:

@example
Ichar itext_ichar (Ibyte *p);
@end example

@code{set_itext_ichar} stores an @code{Ichar} at the specified byte position.
It returns the number of bytes stored:

@example
Bytecount set_itext_ichar (Ibyte *p, Ichar c);
@end example

It is important to note that @code{set_itext_ichar} is safe only for appending a character at the end of a buffer, not for overwriting a character in the middle.  This is because the width of characters varies, and @code{set_itext_ichar} cannot resize the string if it writes, say, a two-byte character where a single-byte character used to reside.

A typical use of @code{set_itext_ichar} can be demonstrated by this example, which copies characters from buffer @var{buf} to a temporary string of Ibytes pointed to by @var{p}:

@example
@group
@{
  Charbpos pos;
  for (pos = beg; pos < end; pos++)
    @{
      Ichar c = BUF_FETCH_CHAR (buf, pos);
      p += set_itext_ichar (p, c);
    @}
@}
@end group
@end example

Note how @code{set_itext_ichar} is used to store the @code{Ichar} and advance the pointer at the same time.

@item INC_IBYTEPTR
@itemx DEC_IBYTEPTR
@cindex INC_IBYTEPTR
@cindex DEC_IBYTEPTR
These two macros increment and decrement an @code{Ibyte} pointer, respectively.  They will adjust the pointer by the appropriate number of bytes according to the byte length of the character stored there.  Both macros assume that the memory address is located at the beginning of a valid character.

Without Mule support, @code{INC_IBYTEPTR (p)} and @code{DEC_IBYTEPTR (p)} simply expand to @code{p++} and @code{p--}, respectively.

@item bytecount_to_charcount
@cindex bytecount_to_charcount
Given a pointer to a text string and a length in bytes, return the equivalent length in characters.

@example
Charcount bytecount_to_charcount (Ibyte *p, Bytecount bc);
@end example

@item charcount_to_bytecount
@cindex charcount_to_bytecount
Given a pointer to a text string and a length in characters, return the equivalent length in bytes.

@example
Bytecount charcount_to_bytecount (Ibyte *p, Charcount cc);
@end example

@item itext_n_addr
@cindex itext_n_addr
Return a pointer to the beginning of the character offset @var{cc} (in characters) from @var{p}.

@example
Ibyte *itext_n_addr (Ibyte *p, Charcount cc);
@end example
@end table
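As a complement to the table above, here is a minimal sketch of the standard idiom for iterating over internal text one character at a time.  The function is invented for illustration; the macros are the ones just described:

@example
/* Sketch: count the occurrences of character CH in the LEN
   bytes of internal-format text starting at P.  */
static Charcount
sketch_count_char (Ibyte *p, Bytecount len, Ichar ch)
@{
  Ibyte *end = p + len;
  Charcount count = 0;

  while (p < end)
    @{
      if (itext_ichar (p) == ch)  /* examine the char at P */
        count++;
      INC_IBYTEPTR (p);           /* advance by one character */
    @}
  return count;
@}
@end example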
@node Conversion to and from External Data, General Guidelines for Writing Mule-Aware Code, Working With Character and Byte Positions, Coding for Mule
@subsection Conversion to and from External Data
@cindex conversion to and from external data
@cindex external data, conversion to and from

When an external function, such as a C library function, returns a @code{char} pointer, you should almost never treat it as @code{Ibyte}.  This is because these returned strings may contain 8-bit characters which can be misinterpreted by XEmacs, and cause a crash.  Likewise, when exporting a piece of internal text to the outside world, you should always convert it to an appropriate external encoding, lest the internal stuff (such as the infamous \201 characters) leak out.

The interface to conversion between the internal and external representations of text is the numerous conversion macros defined in @file{buffer.h}.  There used to be a fixed set of external formats supported by these macros, but now any coding system can be used with them.  The coding system alias mechanism is used to create the following logical coding systems, which replace the fixed external formats.  The (dontusethis-set-symbol-value-handler) mechanism was enhanced to make this possible (more work on that is needed).

Often useful coding systems:

@table @code
@item Qbinary
This is the simplest format and is what we use in the absence of a more appropriate format.  This converts according to the @code{binary} coding system:

@enumerate a
@item
On input, bytes 0--255 are converted into (implicitly Latin-1) characters 0--255.  A non-Mule XEmacs doesn't really know about different character sets and the fonts to display them, so the bytes can be treated as text in different 1-byte encodings by simply setting the appropriate fonts.  So in a sense, a non-Mule XEmacs is a multilingual editor if, for example, different fonts are used to display text in different buffers, faces, or windows.  The specifier mechanism gives the user complete control over this kind of behavior.
@item
On output, characters 0--255 are converted into bytes 0--255 and other characters are converted into @samp{~}.
@end enumerate

@item Qnative
Format used for the external Unix environment---@code{argv[]}, stuff from @code{getenv()}, stuff from the @file{/etc/passwd} file, etc.  This is encoded according to the encoding specified by the current locale.  [[This is dangerous; the current locale is a user preference, and the system is probably going to be something else.  Is there anything we can do about it?]]

@item Qfile_name
Format used for filenames.  This is normally the same as @code{Qnative}, but the two should be distinguished for clarity and possible future separation -- and also because @code{Qfile_name} can be changed using either the @code{file-name-coding-system} or @code{pathname-coding-system} (now obsolete) variables.

@item Qctext
Compound-text format.  This is the standard X11 format used for data stored in properties, selections, and the like.  This is an 8-bit no-lock-shift ISO 2022 coding system.  This is a real coding system, unlike @code{Qfile_name}, which is user-definable.

@item Qmswindows_tstr
Used for external data in all MS Windows functions that are declared to accept data of type @code{LPTSTR} or @code{LPCSTR}.  This maps to either @code{Qmswindows_multibyte} (a locale-specific encoding, same as @code{Qnative}) or @code{Qmswindows_unicode}, depending on whether XEmacs is being run under Windows 9X or Windows NT/2000/XP.
@end table

Many other coding systems are provided by default.

There are two fundamental macros to convert between external and internal format, as well as various convenience macros to simplify the most common operations.

@code{TO_INTERNAL_FORMAT} converts external data to internal format, and @code{TO_EXTERNAL_FORMAT} converts the other way around.  The arguments each of these receives are a source type, a source, a sink type, a sink, and a coding system (or a symbol naming a coding system).  A typical call looks like

@example
TO_EXTERNAL_FORMAT (LISP_STRING, str, C_STRING_MALLOC, ptr, Qfile_name);
@end example

which means that the contents of the lisp string @code{str} are written to a malloc'ed memory area which will be pointed to by @code{ptr}, after the function returns.  The conversion will be done using the @code{file-name} coding system, which will be controlled by the user indirectly by setting or binding the variable @code{file-name-coding-system}.

Some sources and sinks require two C variables to specify.  We use some preprocessor magic to allow different source and sink types, and even different numbers of arguments to specify different types of sources and sinks.  So we can have a call that looks like

@example
TO_INTERNAL_FORMAT (DATA, (ptr, len),
                    MALLOC, (ptr, len),
                    coding_system);
@end example

The parenthesized argument pairs are required to make the preprocessor magic work.
Here are the different source and sink types: @table @code @item @code{DATA, (ptr, len),} input data is a fixed buffer of size @var{len} at address @var{ptr} @item @code{ALLOCA, (ptr, len),} output data is placed in an @code{alloca()}ed buffer of size @var{len} pointed to by @var{ptr} @item @code{MALLOC, (ptr, len),} output data is in a @code{malloc()}ed buffer of size @var{len} pointed to by @var{ptr} @item @code{C_STRING_ALLOCA, ptr,} equivalent to @code{ALLOCA (ptr, len_ignored)} on output. @item @code{C_STRING_MALLOC, ptr,} equivalent to @code{MALLOC (ptr, len_ignored)} on output @item @code{C_STRING, ptr,} equivalent to @code{DATA, (ptr, strlen/wcslen (ptr))} on input @item @code{LISP_STRING, string,} input or output is a Lisp_Object of type string @item @code{LISP_BUFFER, buffer,} output is written to @code{(point)} in lisp buffer @var{buffer} @item @code{LISP_LSTREAM, lstream,} input or output is a Lisp_Object of type lstream @item @code{LISP_OPAQUE, object,} input or output is a Lisp_Object of type opaque @end table A source type of @code{C_STRING} or a sink type of @code{C_STRING_ALLOCA} or @code{C_STRING_MALLOC} is appropriate where the external API is not '\0'-byte-clean -- i.e. it expects strings to be terminated with a null byte. For external API's that are in fact '\0'-byte-clean, we should of course not use these. The sinks to be specified must be lvalues, unless they are the lisp object types @code{LISP_LSTREAM} or @code{LISP_BUFFER}. There is no problem using the same lvalue for source and sink. Garbage collection is inhibited during these conversion operations, so it is OK to pass in data from Lisp strings using @code{XSTRING_DATA}. For the sink types @code{ALLOCA} and @code{C_STRING_ALLOCA}, the resulting text is stored in a stack-allocated buffer, which is automatically freed on returning from the function. However, the sink types @code{MALLOC} and @code{C_STRING_MALLOC} return @code{xmalloc()}ed memory. The caller is responsible for freeing this memory using @code{xfree()}. Note that it doesn't make sense for @code{LISP_STRING} to be a source for @code{TO_INTERNAL_FORMAT} or a sink for @code{TO_EXTERNAL_FORMAT}. You'll get an assertion failure if you try. 99% of conversions involve raw data or Lisp strings as both source and sink, and usually data is output as @code{alloca()}, or sometimes @code{xmalloc()}. For this reason, convenience macros are defined for many types of conversions involving raw data and/or Lisp strings, especially when the output is an @code{alloca()}ed string. (When the destination is a Lisp string, there are other functions that should be used instead -- @code{build_ext_string()} and @code{make_ext_string()}, for example.) The convenience macros are of two types -- the older kind that store the result into a specified variable, and the newer kind that return the result. The newer kind of macros don't exist when the output is sized data, because that would have two return values. NOTE: All convenience macros are ultimately defined in terms of @code{TO_EXTERNAL_FORMAT} and @code{TO_INTERNAL_FORMAT}. Thus, any comments above about the workings of these macros also apply to all convenience macros. 
A typical old-style convenience macro is

@example
C_STRING_TO_EXTERNAL (in, out, codesys);
@end example

This is equivalent to

@example
TO_EXTERNAL_FORMAT (C_STRING, in, C_STRING_ALLOCA, out, codesys);
@end example

but is easier to write and somewhat clearer, since it clearly identifies the arguments without the clutter of having the preprocessor types mixed in.

The new-style equivalent is @code{NEW_C_STRING_TO_EXTERNAL (src, codesys)}, which @emph{returns} the converted data (still in @code{alloca()} space).  This is far more convenient for most operations.
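To make the difference concrete, here is a minimal sketch of both styles side by side.  The use of @code{chdir()} and the variable names are invented for the example; the macros are the ones just described, and @code{XSTRING_DATA} is used as discussed above:

@example
/* Sketch: pass the Lisp string DIR to a system call expecting
   a null-terminated string in the file-name encoding.  */

/* Old style: declare a variable and store into it.  */
@{
  Extbyte *out;
  C_STRING_TO_EXTERNAL (XSTRING_DATA (dir), out, Qfile_name);
  chdir (out);                /* OUT is in alloca() space */
@}

/* New style: the macro returns the converted data.  */
chdir (NEW_C_STRING_TO_EXTERNAL (XSTRING_DATA (dir), Qfile_name));
@end example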
@node General Guidelines for Writing Mule-Aware Code, An Example of Mule-Aware Code, Conversion to and from External Data, Coding for Mule
@subsection General Guidelines for Writing Mule-Aware Code
@cindex writing Mule-aware code, general guidelines for
@cindex Mule-aware code, general guidelines for writing
@cindex code, general guidelines for writing Mule-aware

This section contains some general guidance on how to write Mule-aware code, as well as some pitfalls you should avoid.

@table @emph
@item Never use @code{char} and @code{char *}.
In XEmacs, the use of @code{char} and @code{char *} is almost always a mistake.  If you want to manipulate an Emacs character from ``C'', use @code{Ichar}.  If you want to examine a specific octet in the internal format, use @code{Ibyte}.  If you want a Lisp-visible character, use a @code{Lisp_Object} and @code{make_char}.  If you want a pointer to move through the internal text, use @code{Ibyte *}.  Also note that you almost certainly do not need @code{Ichar *}.  Other typedefs to clarify the use of @code{char} are @code{Char_ASCII}, @code{Char_Binary}, @code{UChar_Binary}, and @code{CIbyte}.

@item Be careful not to confuse @code{Charcount}, @code{Bytecount}, @code{Charbpos} and @code{Bytebpos}.
The whole point of using different types is to avoid confusion about the use of certain variables.  Lest this effect be nullified, you need to be careful about using the right types.

@item Always convert external data
It is extremely important to always convert external data, because XEmacs can crash if unexpected 8-bit sequences are copied to its internal buffers literally.  This means that when a system function, such as @code{readdir}, returns a string, you normally need to convert it using one of the conversion macros described in the previous section, before passing it further to Lisp.

Actually, most of the basic system functions that accept '\0'-terminated string arguments, like @code{stat()} and @code{open()}, have @strong{encapsulated} equivalents that do the internal-to-external conversion themselves.  The encapsulated equivalents have a @code{qxe_} prefix and have string arguments of type @code{Ibyte *}, and you can pass internally-encoded data to them, often from a Lisp string using @code{XSTRING_DATA}.  (A better design might be to provide versions that accept Lisp strings directly.)  [[Really?  Then they'd either take @code{Lisp_Object}s and need to check type, or they'd take @code{Lisp_String}s, and violate the rules about passing any of the specific Lisp types.]]

Also note that many internal functions, such as @code{make_string}, accept Ibytes, which removes the need for them to convert the data they receive.  This increases efficiency because that way external data needs to be decoded only once, when it is read.  After that, it is passed around in internal format.

@item Do all work in internal format
External-formatted data is completely unpredictable in its format.  It may be fixed-width Unicode (not even ASCII-compatible); it may be a modal encoding, in which case some occurrences of (e.g.) the slash character may be part of two-byte Asian-language characters, and a naive attempt to split apart a pathname by slashes will fail; etc.  Internal-format text should be converted to external format only at the point where an external API is actually called, and the first thing done after receiving external-format text from an external API should be to convert it to internal text.
@end table

@node An Example of Mule-Aware Code, Mule-izing Code, General Guidelines for Writing Mule-Aware Code, Coding for Mule
@subsection An Example of Mule-Aware Code
@cindex code, an example of Mule-aware
@cindex Mule-aware code, an example of

As an example of Mule-aware code, we will analyze the @code{string} function, which conses up a Lisp string from the character arguments it receives.  Here is the definition, pasted from @file{alloc.c}:

@example
@group
DEFUN ("string", Fstring, 0, MANY, 0, /*
Concatenate all the argument characters and make the result a string.
*/
       (int nargs, Lisp_Object *args))
@{
  Ibyte *storage = alloca_array (Ibyte, nargs * MAX_ICHAR_LEN);
  Ibyte *p = storage;

  for (; nargs; nargs--, args++)
    @{
      Lisp_Object lisp_char = *args;
      CHECK_CHAR_COERCE_INT (lisp_char);
      p += set_itext_ichar (p, XCHAR (lisp_char));
    @}
  return make_string (storage, p - storage);
@}
@end group
@end example

Now we can analyze the source line by line.

Obviously, the string will contain as many characters as there are arguments to the function.  This is why we allocate @code{MAX_ICHAR_LEN} * @var{nargs} bytes on the stack, i.e. the worst-case number of bytes for @var{nargs} @code{Ichar}s to fit in the string.

Then, the loop checks that each element is a character, converting integers in the process.  Like many other functions in XEmacs, this function silently accepts integers where characters are expected, for historical and compatibility reasons.  Unless you know what you are doing, @code{CHECK_CHAR} will also suffice.  @code{XCHAR (lisp_char)} extracts the @code{Ichar} from the @code{Lisp_Object}, and @code{set_itext_ichar} stores it at @var{p}, advancing @var{p} in the process.

Other instructive examples of correct coding under Mule can be found all over the XEmacs code.  For starters, I recommend @code{Fnormalize_menu_item_name} in @file{menubar.c}.  After you have understood this section of the manual and studied the examples, you can proceed to writing new Mule-aware code.

@node Mule-izing Code, , An Example of Mule-Aware Code, Coding for Mule
@subsection Mule-izing Code

A lot of code is written without Mule in mind, and needs to be made Mule-correct or "Mule-ized".  There is really no substitute for line-by-line analysis when doing this, but the following checklist can help (a small before-and-after sketch follows the list):

@itemize @bullet
@item
Check all uses of @code{XSTRING_DATA}.
@item
Check all uses of @code{build_string} and @code{make_string}.
@item
Check all uses of @code{tolower} and @code{toupper}.
@item
Check object print methods.
@item
Check for use of functions such as @code{write_c_string}, @code{write_fmt_string}, @code{stderr_out}, @code{stdout_out}.
@item
Check all occurrences of @code{char} and correct to one of the other typedefs described above.
@item
Check all existing uses of @code{TO_EXTERNAL_FORMAT}, @code{TO_INTERNAL_FORMAT}, and any convenience macros (grep for @samp{EXTERNAL_TO}, @samp{TO_EXTERNAL}, and @samp{TO_SIZED_EXTERNAL}).
@item
In Windows code, string literals may need to be encapsulated with @code{XETEXT}.
@end itemize
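As a concrete illustration of the checklist, here is a minimal before-and-after sketch of Mule-izing a tiny helper.  The function is invented for the example; the converted version uses only macros and functions documented in this chapter:

@example
/* Before: Mule-unaware.  External bytes from getenv() leak
   directly into internal text.  */
Lisp_Object
sketch_getenv_to_string (const char *var)
@{
  char *val = getenv (var);
  return val ? build_string (val) : Qnil;
@}

/* After: Mule-aware.  Decode the external data first.  */
Lisp_Object
sketch_getenv_to_string (const Ascbyte *var)
@{
  Extbyte *val = getenv (var);
  Ibyte *valint;

  if (!val)
    return Qnil;
  TO_INTERNAL_FORMAT (C_STRING, val,
                      C_STRING_ALLOCA, valint,
                      Qnative);
  return build_string (valint);
@}
@end example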
@node CCL, Microsoft Windows-Related Multilingual Issues, Coding for Mule, Multilingual Support
@section CCL
@cindex CCL

@example
MACHINE CODE:

The machine code consists of a vector of 32-bit words.
The first such word specifies the start of the EOF section of the
code; this is the code executed to handle any stuff that needs to be
done (e.g. designating back to ASCII and left-to-right mode) after
all other encoded/decoded data has been written out.  This is not
used for charset CCL programs.

REGISTER: 0..7  -- referred to by RRR or rrr

OPERATOR BIT FIELD (27-bit): XXXXXXXXXXXXXXX RRR TTTTT
        TTTTT (5-bit): operator type
        RRR (3-bit): register number
        XXXXXXXXXXXXXXXX (15-bit):
                CCCCCCCCCCCCCCC: constant or address
                000000000000rrr: register number

AAAA: 00000 +
      00001 -
      00010 *
      00011 /
      00100 %
      00101 &
      00110 |
      00111 ~
      01000 <<
      01001 >>
      01010 <8
      01011 >8
      01100 //
      01101 not used
      01110 not used
      01111 not used
      10000 <
      10001 >
      10010 ==
      10011 <=
      10100 >=
      10101 !=

OPERATORS:      TTTTT RRR XX..

SetCS:          00000 RRR C...C      RRR = C...C
SetCL:          00001 RRR .....      RRR = c...c
                c.............c
SetR:           00010 RRR ..rrr      RRR = rrr
SetA:           00011 RRR ..rrr      RRR = array[rrr]
                C.............C      size of array = C...C
                c.............c      contents = c...c

Jump:           00100 000 c...c      jump to c...c
JumpCond:       00101 RRR c...c      if (!RRR) jump to c...c
WriteJump:      00110 RRR c...c      Write1 RRR, jump to c...c
WriteReadJump:  00111 RRR c...c      Write1, Read1 RRR, jump to c...c
WriteCJump:     01000 000 c...c      Write1 C...C, jump to c...c
                C...C
WriteCReadJump: 01001 RRR c...c      Write1 C...C, Read1 RRR,
                C.............C      and jump to c...c
WriteSJump:     01010 000 c...c      WriteS, jump to c...c
                C.............C
                S.............S
                ...
WriteSReadJump: 01011 RRR c...c      WriteS, Read1 RRR, jump to c...c
                C.............C
                S.............S
                ...
WriteAReadJump: 01100 RRR c...c      WriteA, Read1 RRR, jump to c...c
                C.............C      size of array = C...C
                c.............c      contents = c...c
                ...
Branch:         01101 RRR C...C      if (RRR >= 0 && RRR < C..)
                c.............c      branch to (RRR+1)th address
Read1:          01110 RRR ...        read 1-byte to RRR
Read2:          01111 RRR ..rrr      read 2-byte to RRR and rrr
ReadBranch:     10000 RRR C...C      Read1 and Branch
                c.............c
                ...
Write1:         10001 RRR .....      write 1-byte RRR
Write2:         10010 RRR ..rrr      write 2-byte RRR and rrr
WriteC:         10011 000 .....      write 1-char C...CC
                C.............C
WriteS:         10100 000 .....      write C..-byte of string
                C.............C
                S.............S
                ...
WriteA:         10101 RRR .....      write array[RRR]
                C.............C      size of array = C...C
                c.............c      contents = c...c
                ...
End:            10110 000 .....      terminate the execution

SetSelfCS:      10111 RRR C...C      RRR AAAAA= C...C
                ..........AAAAA
SetSelfCL:      11000 RRR .....      RRR AAAAA= c...c
                c.............c
                ..........AAAAA
SetSelfR:       11001 RRR ..Rrr      RRR AAAAA= rrr
                ..........AAAAA
SetExprCL:      11010 RRR ..Rrr      RRR = rrr AAAAA c...c
                c.............c
                ..........AAAAA
SetExprR:       11011 RRR ..rrr      RRR = rrr AAAAA Rrr
                ............Rrr
                ..........AAAAA
JumpCondC:      11100 RRR c...c      if !(RRR AAAAA C..) jump to c...c
                C.............C
                ..........AAAAA
JumpCondR:      11101 RRR c...c      if !(RRR AAAAA rrr) jump to c...c
                ............rrr
                ..........AAAAA
ReadJumpCondC:  11110 RRR c...c      Read1 and JumpCondC
                C.............C
                ..........AAAAA
ReadJumpCondR:  11111 RRR c...c      Read1 and JumpCondR
                ............rrr
                ..........AAAAA
@end example
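For orientation, here is a minimal sketch of how such a code word might be unpacked in C.  This is purely illustrative -- the function and the field extraction are assumptions based on the diagram above (low 5 bits = operator type, next 3 bits = register, remaining bits = constant/address), not the actual CCL interpreter:

@example
/* Sketch: unpack one 32-bit CCL code word, assuming the field
   layout shown in the diagram above.  */
static void
sketch_unpack_ccl_word (unsigned int word,
                        int *type, int *reg, int *operand)
@{
  *type    = word & 0x1F;        /* TTTTT: operator type */
  *reg     = (word >> 5) & 0x7;  /* RRR: register number */
  *operand = word >> 8;          /* C...C / rrr: constant, address,
                                    or second register */
@}
@end example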
@node Microsoft Windows-Related Multilingual Issues, Modules for Internationalization, CCL, Multilingual Support
@section Microsoft Windows-Related Multilingual Issues
@cindex Microsoft Windows-related multilingual issues
@cindex Windows-related multilingual issues
@cindex multilingual issues, Windows-related

@menu
* Microsoft Documentation::
* Locales::
* More about code pages::
* More about locales::
* Unicode support under Windows::
* The golden rules of writing Unicode-safe code::
* The format of the locale in setlocale()::
* Random other Windows I18N docs::
@end menu

@node Microsoft Documentation, Locales, Microsoft Windows-Related Multilingual Issues, Microsoft Windows-Related Multilingual Issues
@subsection Microsoft Documentation
@cindex Microsoft documentation

Documentation on international support in Windows is scattered throughout MSDN.  Here are some good places to look:

@enumerate
@item
C Runtime (CRT) intl support

@enumerate
@item
Visual Tools and Languages -> Visual Studio 6.0 Documentation -> Visual C++ Documentation -> Using Visual C++ -> Run-Time Library Reference -> Internationalization
@item
Visual Tools and Languages -> Visual Studio 6.0 Documentation -> Visual C++ Documentation -> Using Visual C++ -> Run-Time Library Reference -> Global Constants -> Locale Categories
@item
Visual Tools and Languages -> Visual Studio 6.0 Documentation -> Visual C++ Documentation -> Using Visual C++ -> Run-Time Library Reference -> Appendixes -> Language and Country/Region Strings
@item
Visual Tools and Languages -> Visual Studio 6.0 Documentation -> Visual C++ Documentation -> Using Visual C++ -> Run-Time Library Reference -> Appendixes -> Generic-Text Mappings
@item
Function documentation for various functions: Visual Tools and Languages -> Visual Studio 6.0 Documentation -> Visual C++ Documentation -> Using Visual C++ -> Run-Time Library Reference -> Alphabetic Function Reference, e.g. _setmbcp(), setlocale(), the strcoll functions
@end enumerate

@item
Win32 API intl support

@enumerate
@item
Platform SDK Documentation -> Base Services -> International Features
@item
Platform SDK Documentation -> User Interface Services -> Windows User Interface -> User Input -> Keyboard Input -> Character Messages -> International Features
@item
Backgrounders -> Windows Platform -> Windows 2000 -> International Support in Microsoft Windows 2000
@end enumerate

@item
Microsoft Layer for Unicode

Platform SDK Documentation -> Windows API -> Windows 95/98/Me Programming -> Windows 95/98/Me Overviews -> Microsoft Layer for Unicode on Windows 95/98/Me Systems

@item
Look in the CRT sources!  They come with VC++.  See @file{win32.c}.
@end enumerate

@node Locales, More about code pages, Microsoft Documentation, Microsoft Windows-Related Multilingual Issues
@subsection Locales, code pages, and other concepts of "language"
@cindex locales, code pages, and other concepts of "language"

First, make sure you clearly understand the difference between the C runtime library (CRT) and the Win32 API!  See @file{win32.c}.

There are various different ways of representing the vague concept of "language", and it can be very confusing.
So: @itemize @bullet @item The CRT library has the concept of "locale", which is a combination of language and country, and which controls the way currency and dates are displayed, the encoding of data, etc. @item XEmacs has the concept of "language environment", more or less like a locale; although currently in most cases it just refers to the language, and no sub-language distinctions are made. (Exceptions are with Chinese, which has different language environments for Taiwan and mainland China, due to the different encodings and writing systems.) @item Windows has a number of different language concepts: @enumerate @item There are "languages" and "sublanguages", which correspond to the languages and countries of the C library -- e.g. LANG_ENGLISH and SUBLANG_ENGLISH_US. These are identified by 8-bit integers, called the "primary language identifier" and "sublanguage identifier", respectively. These are combined into a 16-bit integer or "language identifier" by MAKELANGID(). @item The language identifier in turn is combined with a "sort identifier" (and optionally a "sort version") to yield a 32-bit integer called a "locale identifier" (type LCID), which identifies locales -- the primary means of distinguishing language/regional settings and similar to C library locales. @item A "code page" combines the XEmacs concepts of "charset" and "coding system". It logically encompasses @itemize @minus @item a set of supported characters @item an enumeration associating each character with a code point, which is a number or number pair; there may be disjoint ranges of numbers supported @item a way of encoding a series of characters into a string of bytes @end itemize Note that the first two properties correspond to an XEmacs "charset" and the latter an XEmacs "coding system". Traditional encodings are either simple one-byte encodings, or combination one-byte/two-byte encodings (aka MBCS encodings, where MBCS stands for "Multibyte Character Set") with the following properties: @itemize @minus @item all characters are encoded as a one-byte or two-byte sequence @item the encoding is stateless (non-modal) @item the lower 128 bytes are compatible with ASCII @item in the higher bytes, the value of the first byte ("lead byte") determines whether a second byte follows @item the values used for second bytes may overlap those used for first bytes, and (in some encodings) include values in the low half; thus, moving backwards is hard, and pure-ASCII algorithms (e.g. finding the next slash) will fail unless rewritten to be MBCS-aware (neither of these problems exist in UTF-8 or in the XEmacs internal string encoding) @end itemize Recent code pages, however, do not necessarily follow these properties -- code pages have been expanded to include arbitrary encodings, such as UTF-8 (may have more than two bytes per character) and ISO-2022-JP (complex modal encoding). @item Every Windows locale has four associated code pages: ANSI (an international standard or some Microsoft-created approximation; the native code page under Windows), OEM (a DOS encoding, still used in the FAT file system), Mac (an encoding used on the Macintosh) and EBCDIC (a non-ASCII-compatible encoding used on IBM mainframes, originally based on the BCD or "binary-coded decimal" encoding of numbers). All code pages associated with a locale follow (as far as I know) the properties listed above for traditional code pages. More than one locale can share a code page -- e.g. all the Western European languages, including English, do. 
@item
Windows also has an "input locale identifier" (aka "keyboard layout
id") or HKL, which is a 32-bit integer composed of the 16-bit language
identifier and a 16-bit "device identifier", which originally
specified a particular keyboard layout (e.g. the locale "US English"
can have the QWERTY layout, the Dvorak layout, etc.), but has been
expanded to include speech-to-text converters and other non-keyboard
ways of inputting text.

Note that both the HKL and LCID share the language identifier in the
lower 16 bits, and in both cases a 0 in the upper 16 bits means
"default" (sort order or device), providing a way to convert between
HKL's, LCID's, and language identifiers (i.e. language/sublanguage
pairs).

The default keyboard layout for a language is (as far as I can
determine) established using the Regional Settings control panel
applet, where you can add input locales as combinations of language
(actually language/sublanguage) and layout; presumably if you list
only one input locale with a particular language, the corresponding
layout is the default for that language.  But what if you list more
than one?  You can specify a single default input locale, but there
appears to be no way to do so on a per-language basis.
@end enumerate
@end itemize

@node More about code pages, More about locales, Locales, Microsoft Windows-Related Multilingual Issues
@subsection More about code pages
@cindex more about code pages

Here is what MSDN says about code pages (article "Code Pages"):

@quotation
A code page is a character set, which can include numbers, punctuation
marks, and other glyphs.  Different languages and locales may use
different code pages.  For example, ANSI code page 1252 is used for
American English and most European languages; OEM code page 932 is
used for Japanese Kanji.

A code page can be represented in a table as a mapping of characters
to single-byte values or multibyte values.  Many code pages share the
ASCII character set for characters in the range 0x00 - 0x7F.

The Microsoft run-time library uses the following types of code pages:

-- System-default ANSI code page.  By default, at startup the run-time
system automatically sets the multibyte code page to the
system-default ANSI code page, which is obtained from the operating
system.  The call setlocale ( LC_ALL, "" ); also sets the locale to
the system-default ANSI code page.

-- Locale code page.  The behavior of a number of run-time routines is
dependent on the current locale setting, which includes the locale
code page.  (For more information, see Locale-Dependent Routines.)  By
default, all locale-dependent routines in the Microsoft run-time
library use the code page that corresponds to the "C" locale.  At
run-time you can change or query the locale code page in use with a
call to setlocale.

-- Multibyte code page.  The behavior of most of the
multibyte-character routines in the run-time library depends on the
current multibyte code page setting.  By default, these routines use
the system-default ANSI code page.  At run-time you can query and
change the multibyte code page with _getmbcp and _setmbcp,
respectively.

-- The "C" locale is defined by ANSI to correspond to the locale in
which C programs have traditionally executed.  The code page for the
"C" locale ("C" code page) corresponds to the ASCII character set.
For example, in the "C" locale, islower returns true for the values
0x61 - 0x7A only.  In another locale, islower may return true for
these as well as other values, as defined by that locale.
Under "Locale-Dependent Routines" we notice the following setlocale dependencies: atof, atoi, atol (LC_NUMERIC) is Routines (LC_CTYPE) isleadbyte (LC_CTYPE) localeconv (LC_MONETARY, LC_NUMERIC) MB_CUR_MAX (LC_CTYPE) _mbccpy (LC_CTYPE) _mbclen (LC_CTYPE) mblen (LC_CTYPE ) _mbstrlen (LC_CTYPE) mbstowcs (LC_CTYPE) mbtowc (LC_CTYPE) printf (LC_NUMERIC, for radix character output) scanf (LC_NUMERIC, for radix character recognition) setlocale/_wsetlocale (Not applicable) strcoll (LC_COLLATE) _stricoll/_wcsicoll (LC_COLLATE) _strncoll/_wcsncoll (LC_COLLATE) _strnicoll/_wcsnicoll (LC_COLLATE) strftime, wcsftime (LC_TIME) _strlwr (LC_CTYPE) strtod/wcstod/strol/wcstol/strtoul/wcstoul (LC_NUMERIC, for radix character recognition) _strupr (LC_CTYPE) strxfrm/wcsxfrm (LC_COLLATE) tolower/towlower (LC_CTYPE) toupper/towupper (LC_CTYPE) wcstombs (LC_CTYPE) wctomb (LC_CTYPE) _wtoi/_wtol (LC_NUMERIC) @end quotation NOTE: The above documentation doesn't clearly explain the "locale code page" and "multibyte code page". These are two different values, maintained respectively in the CRT global variables __lc_codepage and __mbcodepage. Calling e.g. setlocale (LC_ALL, "JAPANESE") sets @strong{ONLY} __lc_codepage to 932 (the code page for Japanese), and leaves __mbcodepage unchanged (usually 1252, i.e. Windows-ANSI). You'd have to call _setmbcp() to change __mbcodepage. Figuring out from the documentation which routines use which code page is not so obvious. But: @itemize @bullet @item from "Interpretation of Multibyte-Character Sequences" it appears that all "multibyte-character routines" use the multibyte code page except for mblen(), _mbstrlen(), mbstowcs(), mbtowc(), wcstombs(), and wctomb(). @item from "_setmbcp": "The multibyte code page also affects multibyte-character processing by the following run-time library routines: _exec functions _mktemp _stat _fullpath _spawn functions _tempnam _makepath _splitpath tmpnam. In addition, all run-time library routines that receive multibyte-character argv or envp program arguments as parameters (such as the _exec and _spawn families) process these strings according to the multibyte code page. Hence these routines are also affected by a call to _setmbcp that changes the multibyte code page." @end itemize Summary: from looking at the CRT source (which comes with VC++) and carefully looking through the docs, it appears that: @itemize @bullet @item the "locale code page" is used by all of the routines listed above under "Locale-Dependent Routines" (EXCEPT _mbccpy() and _mbclen()), as well as any other place that converts between multibyte and Unicode strings, e.g. the startup code. @item the "multibyte code page" is used in all of the *mb*() routines except mblen(), _mbstrlen(), mbstowcs(), mbtowc(), wcstombs(), and wctomb(); also _exec*(), _spawn*(), _mktemp(), _stat(), _fullpath(), _tempnam(), _makepath(), _splitpath(), tmpnam(), and similar functions without the leading underscore. @end itemize @node More about locales, Unicode support under Windows, More about code pages, Microsoft Windows-Related Multilingual Issues @subsection More about locales @cindex more about locales In addition to the locale defined by the CRT, Windows (i.e. the Win32 API) defines various locales: @itemize @bullet @item The system-default locale is the locale defined under "Language settings for the system" in the "Regional Options" control panel. This is NOT user-specific, and changing it requires a reboot (at least under Windows 2000). 
The ANSI code page of the system-default locale is returned by
GetACP(), and you can specify this code page in calls e.g. to
MultiByteToWideChar with the constant CP_ACP.

@item
The user-default locale is the locale defined under "Settings for the
current user" in the "Regional Options" control panel.

@item
There is a thread-local locale set by SetThreadLocale.  #### What is
this used for?
@end itemize

The Win32 API has a bunch of multibyte functions -- all of those that
end with ...A(), and on which we spend so much effort in
intl-encap-win32.c.  These appear to ALWAYS use the ANSI code page of
the system-default locale (GetACP(), CP_ACP).  Note that this applies
also, for example, to the encoding of filenames in all file-handling
routines, including the CRT ones such as open(), because they pass
their args unchanged to the Win32 API.

@node Unicode support under Windows, The golden rules of writing Unicode-safe code, More about locales, Microsoft Windows-Related Multilingual Issues
@subsection Unicode support under Windows
@cindex unicode support under windows

Basically, the whole concept of locales and code pages is broken,
because it is extremely messy to support and does not allow for
documents that use multiple languages simultaneously.  Unicode was
designed in response to this, the idea being to create a single
character set that could be used to encode all the world's languages.
Windows has supported Unicode since the beginning of the Win32 API.
Internally, every code page has an associated table to convert the
characters of that code page to and from Unicode, and the Win32 API
itself probably (perhaps always) uses Unicode internally.

Under Windows there are two different versions of all library routines
that accept or return text, those that handle Unicode text and those
handling "multibyte" text, i.e. variable-width ASCII-compatible text
in some national format such as EUC or Shift-JIS.

Because Windows 95 basically doesn't support Unicode but Windows NT
does, and Microsoft doesn't provide any way of writing a single binary
that will work on both systems and still use Unicode when it's
available (although see below, Microsoft Layer for Unicode), we need
to provide a way of run-time conditionalizing so you could have one
binary for both systems.  "Unicode-splitting" refers to writing code
that will handle this properly.  This means using Qmswindows_tstr as
the external conversion format, calling the appropriate qxe...()
Unicode-split version of library functions, and doing other things in
certain cases, e.g. when a qxe() function is not present.

Unicode support also requires that the various Windows API's be
"Unicode-encapsulated", so that they automatically call the ANSI or
Unicode version of the API call appropriately and handle the size
differences in structures.  What this means is:

@itemize @bullet
@item
first, note that Windows already provides a sort of encapsulation of
all API's that deal with text.  All such API's are underlyingly
provided in two versions, with an A or W suffix (ANSI or "wide",
i.e. Unicode), and the compile-time constant UNICODE controls which is
selected by the unsuffixed API.  The same thing happens with
structures, and also with types, where the generic types have names
beginning with T -- TCHAR, LPTSTR, etc.  Unfortunately, this is
compile-time only, not run-time, so it is not sufficient.  (Creating
the necessary run-time encapsulation is not conceptually difficult,
but very time-consuming to write.
It adds no significant overhead, and the only reason it's not standard
in Windows is conscious marketing attempts by Microsoft to cripple
Windows 95.  FUCK MICROSOFT!  They even describe in a KnowledgeBase
article exactly how to create such an API [although we don't exactly
follow their procedure], and point out its usefulness; the procedure
is also described more generally in Nadine Kano's book on Win32
internationalization -- written SIX YEARS AGO!  Obviously Microsoft
has such an API available internally.)

@item
what we do is provide an encapsulation of each standard Windows API
call that is split into A and W versions.  Current theory is to avoid
all preprocessor games; so we name the function with a prefix -- "qxe"
currently -- and require callers to use the prefixed name.  Callers
need to explicitly use the W version of all structures, and convert
text themselves using Qmswindows_tstr.  The qxe encapsulated version
will automatically call the appropriate A or W version depending on
whether we're running on 9x or NT (you can force use of the A calls on
NT, e.g. for testing purposes, using the command-line switch -nuni aka
-no-unicode-lib-calls), and copy data between W and A versions of the
structures as necessary.

@item
We require the caller to handle the actual translation of text to
avoid possible overflow when dealing with fixed-size Windows
structures.  There are no such problems when copying data between the
A and W versions of a structure, because ANSI text is never larger
than its equivalent Unicode representation.
@end itemize

NOTE NOTE NOTE: As of August 2001, Microsoft (finally!  See my nasty
comment above) released their own Unicode-encapsulation library,
called Microsoft Layer for Unicode on Windows 95/98/Me Systems.  It
tries to be more transparent than we are, in that

@itemize @bullet
@item
its routines do ANSI/Unicode string translation, while we don't, for
efficiency (we already have to do internal/external conversion, so
it's no extra burden to do the proper conversion directly rather than
always converting to Unicode and then doing a second conversion to
ANSI as necessary)

@item
rather than requiring separately-named routines (qxeFooBar), they
physically override the existing routines at the link level.  It also
appears that they do this BADLY, in that if you link with the MLU, you
get an application that runs ONLY on Win9x!!!  (Hint -- use
GetProcAddress().)  There's still no way to create a single binary!
Fucking losers.

@item
they assume you compile with UNICODE defined, so there's no need for
the application to explicitly use ...W structures, as we require.

@item
they also intercept windows procedures to deal with notify messages as
necessary, which we don't do yet.

@item
they (of course) don't use Extbyte.
@end itemize

At some point (especially when they fix the single-binary problem!),
we should consider switching.  For the meantime, we'll stick with what
I've already written.  Perhaps we should think about adopting some of
the greater transparency they have; but I opted against transparency
on purpose, to make the code easier to follow for someone who's not
familiar with it.  Until our library is really complete and bug-free,
we should think twice before doing this.
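To make the run-time split concrete, here is a minimal sketch of a
split call.  It is illustrative only: the string conversion shown is
closer to what the MLU does (as noted above, the real qxe wrappers
require the caller to have already converted the text), and
@code{windows_nt_p} is an assumed flag standing in for however the
actual 9x-vs.-NT test is spelled:

@example
/* Hedged sketch, NOT the actual XEmacs code: run-time choice between
   the A and W versions of an API call.  windows_nt_p is an assumed
   flag meaning "the W calls work on this system". */
#include <windows.h>

static BOOL
sketch_SetWindowText (HWND hwnd, const WCHAR *text)
@{
  if (windows_nt_p)
    return SetWindowTextW (hwnd, text);
  else
    @{
      /* A fixed buffer keeps the sketch short; since ANSI text is
         never longer (in bytes) than its Unicode equivalent, real
         code can size the buffer from the Unicode length. */
      char ansi[1024];
      WideCharToMultiByte (CP_ACP, 0, text, -1,
                           ansi, sizeof (ansi), NULL, NULL);
      return SetWindowTextA (hwnd, ansi);
    @}
@}
@end example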
According to Microsoft documentation, only the following functions are
provided under Windows 9x to support Unicode (see MSDN page "Windows
95/98/Me General Limitations"):

EnumResourceLanguages
EnumResourceNames
EnumResourceTypes
ExtTextOut
FindResource
FindResourceEx
GetCharWidth
GetCommandLine
GetTextExtentPoint
GetTextExtentPoint32
lstrcat
lstrcpy
lstrlen
MessageBox
MessageBoxEx
MultiByteToWideChar
TextOut
WideCharToMultiByte

also maybe GetTextExtentExPoint? (KB Q125671 "Unicode Functions
Supported by Windows 95")

However, the C runtime library provides some additional support
(according to the CRT sources, as the docs are not very clear on
this):

@itemize @bullet
@item
wmain() is completely supported, and appropriate Unicode-formatted
argv and envp will always be passed.

@item
Likewise, wWinMain() is completely supported.  (NOTE: The docs are not
at all clear on how these various entry points interact, and imply
that a windows-subsystem program "must" use WinMain(), while a
console-subsystem program "must" use main(), and a program compiled
with UNICODE (which we don't, see above) "must" use the w*() versions,
while a program not compiled this way "must" use the plain versions.
In fact it appears that the CRT provides four different compiler entry
points, namely w?(main|WinMain)CRTStartup, and we simply choose the
one we like using the appropriate link flag.)

@item
_wenviron, _wputenv
@end itemize

NOTE:

@itemize @bullet
@item
wsetargv.obj uses routines that were buggily left out of MSVCRT;
anyway, from looking at the source, it does NOT correctly work under
Win 9x, as it blindly calls the Unicode version of Unicode-split API's
such as FindFirstFile

@item
the w*() file routines are @strong{NOT} supported -- or at least, they
blindly call the ...W() versions of the Win32 API calls.
@end itemize

@node The golden rules of writing Unicode-safe code, The format of the locale in setlocale(), Unicode support under Windows, Microsoft Windows-Related Multilingual Issues
@subsection The golden rules of writing Unicode-safe code
@cindex the golden rules of writing unicode-safe code

@itemize @bullet
@item
There are no preprocessor games going on.

@item
Do not set the UNICODE constant.

@item
You need to change your code to call the Windows API functions
prefixed with "qxe" (when they exist) and use the ...W structs instead
of the generic ones.  String arguments in the qxe functions are of
type Extbyte *.

@item
Your code is responsible for conversion of text arguments.  We try to
handle everything else -- the argument differences, the copying back
and forth of structures, etc.  Use Qmswindows_tstr and macros such as
C_STRING_TO_TSTR.  You are also responsible for interpreting and
specifying string sizes, which have not been changed.  Usually these
are in characters, meaning you need to divide by XETCHAR_SIZE.  (But
some functions want sizes in bytes, even with Unicode strings.  Look
in the documentation.)  Use XETEXT when specifying string constants,
so that they show up in Unicode as necessary.

@item
If you need to process external strings (in general you should not do
this; do all your manipulations in internal format and convert at the
point of entry into or exit from the function), use the xet...()
functions.

@item
If you have to declare a fixed array to hold a string coming from
Windows (and hence either multibyte or Unicode), declare it of type
Extbyte[] and multiply the size by MAX_XETCHAR_SIZE.
@end itemize
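As a hedged illustration of these rules in action, a function that
hands a Lisp string to a Windows API might look roughly like the
following.  The macro and wrapper names (LISP_STRING_TO_TSTR,
qxeSetWindowText) are reproduced from memory and should be checked
against the actual source (@file{intl-encap-win32.c} and the text
conversion headers):

@example
/* Illustrative sketch only -- exact macro signatures are
   assumptions.  Note that the caller does the text conversion (to
   ANSI or Unicode, whichever is right at run time), and that string
   constants go through XETEXT. */
static void
sketch_set_frame_title (HWND hwnd, Lisp_Object title)
@{
  Extbyte *exttitle;

  if (STRINGP (title))
    LISP_STRING_TO_TSTR (title, exttitle);  /* assumed macro */
  else
    exttitle = (Extbyte *) XETEXT ("XEmacs");
  qxeSetWindowText (hwnd, exttitle);        /* qxe-split wrapper */
@}
@end example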
@node The format of the locale in setlocale(), Random other Windows I18N docs, The golden rules of writing Unicode-safe code, Microsoft Windows-Related Multilingual Issues
@subsection The format of the locale in setlocale()
@cindex the format of the locale in setlocale()

It appears that under Unix the standard format for the string in
setlocale() involves two-letter language and country abbreviations,
e.g. ja or ja_jp or ja_jp.euc for Japanese.  Windows (MSDN article
"Language Strings" in the run-time reference appendix, see doc list
above) speaks of "(primary) language" and "sublanguage" (usually a
country, but in the case of Chinese the sublanguage is "simplified" or
"traditional").  It is highly flexible in what it takes, and
thankfully it canonicalizes the result to a unique form
"Language_Country.Encoding".  It allows (note that all specifications
can be in any case):

@itemize @bullet
@item
the full "language_country.encoding" specification or just
"language_country", in which case the default encoding will be chosen.

@item
a three-letter acronym, consisting of the ISO-standard two-letter
language abbreviation followed by a third letter indicating the
sublanguage.

@item
just a language name, e.g. "dutch", standing for the combination of
the language with "default" as sublanguage, referring to the default
(often "prototypical") country for that language (in this case the
Netherlands).  You can abbreviate the name by removing any number of
letters from the end.  Ambiguity is not a problem: Even specifying
just a single letter is valid provided a language starting with that
letter exists, but the result may not be what you want (e.g. "c" maps
to "catalan", not "chinese", "czech", etc.).  The way of resolving
ambiguity appears fairly random -- it's not alphabetical ("a" maps to
"arabic", not "albanian").

@item
a combination of language and sublanguage separated by a hyphen,
e.g. "dutch-belgian"; note that the sublanguage designator in this
case is NOT necessarily the same as the country, e.g. "belgian"
vs. "belgium".  "dutch-belgium" (or even "dutch-belg") does
@strong{NOT} get you the right result, but returns
"Dutch_Netherlands.1252" instead!  This is because, although you may
not abbreviate the sublanguage, Windows accepts any unknown value in
the sublanguage field and treats it as equivalent to "default".  Note
also that if the sublanguage name has underscores in it, you need to
change them to spaces, e.g. "spanish-dominican republic".

@item
sometimes, just a sublanguage name, e.g. "belgian", standing for the
combination of one of the languages spoken in that region and the
sublanguage of the region -- in this case Dutch.  Note that there is
no guarantee of "prototypicality" in this case in the choice of
language!  You could hardly say that Dutch (aka Flemish) is more
prototypical of Belgium than French.  You cannot abbreviate this form,
if it's allowed at all.
@end itemize

In addition:

@itemize @bullet
@item
note further that you are not limited to the language/sublanguage
combinations predefined by Windows.  You can set weird combinations
like "Chinese_Kenya.1255" (Chinese spoken in Kenya, represented by
Windows-1255, i.e. Hebrew!) and Windows doesn't complain, despite the
language-encoding inconsistency.  You can also make up a weird
combination and leave out the encoding, e.g. "Chinese_Qatar", which
maps to "Chinese_Qatar.1256", where Windows-1256 is Arabic -- i.e. it
appears to be choosing the encoding based on a default for the
country.
@item
note also that the names for countries are often not what you expect.
"urdu_pakistan" fails, and just "urdu" shows why, as it maps to
"Urdu_Islamic Republic of Pakistan.1256".  That is, some countries
exist only under their full name, and the canonicalized form with
underscore is not very forgiving in its handling of country
specifications.  Similarly, Uzbekistan is "Republic of Uzbekistan",
and "China" is "People's Republic of China" -- but in this latter
case, unlike the other two, just "China" works as an alias,
e.g. "uzbek_china" maps to "Uzbek_People's Republic of China.936".

@item
note that just the two-letter ISO language code is NOT allowed.
Sometimes you'll get lucky (e.g. "fr" does map to "france"), but
sometimes you'll get no match (e.g. "pl"), and sometimes you'll get
really unlucky in that the call will succeed but with the wrong
language (e.g. "es" maps to "estonian", not "spanish").
@end itemize

As an example, MSDN article "Language Strings" indicates that German
(default) can be specified using "deu" or "german"; German (Austrian)
with "dea" or "german-austrian"; German (Swiss) with "des",
"german-swiss", or "swiss"; French (Swiss) with "french-swiss" or
"frs"; and English (USA) with "american", "american english",
"american-english", "english-american", "english-us", "english-usa",
"enu", "us", or "usa".  This is not, of course, an exhaustive list
even for just the given locales -- just "english" works in practice
because English (Default) maps to English (USA).  (#### Is this always
the case?)

Given the canonicalization, we don't have to worry too much about the
different kinds of inputs to setlocale() -- unlike for Unix, where no
canonicalization is usually performed, the particular locales that
exist vary tremendously from OS to OS, and we need to parse the
uncanonicalized locale spec, directly from the user, to figure out the
encoding to use, making various guesses if not enough information is
present.  Yuck!

The tricky thing under Windows is figuring out how to deal with the
sublang.  It appears that the trick of simply passing the textual name
of the sublang's manifest constant, with appropriate hacking (e.g. of
underscore to space), works most of the time.

@node Random other Windows I18N docs,  , The format of the locale in setlocale(), Microsoft Windows-Related Multilingual Issues
@subsection Random other Windows I18N docs
@cindex random other windows i18n docs

Introduction to Internationalization Issues in the Win32 API

Abstract: This page provides an overview of the aspects of the Win32
internationalization API that are relevant to XEmacs, including the
basic distinction between multibyte and Unicode encodings.  Also
included are pointers to how XEmacs should make use of this API.

The Win32 API is quite well-designed in its handling of strings
encoded for various character sets.  The API is geared around the idea
that two different methods of encoding strings should be supported.
These methods are called multibyte and Unicode, respectively.  The
multibyte encoding is compatible with ASCII strings and is a more
efficient representation when dealing with strings containing
primarily ASCII characters, but it has a great number of serious
deficiencies and limitations, including that it is very difficult and
error-prone to work with strings in this encoding, and any particular
string in a multibyte encoding can only contain characters from a very
limited number of character sets.
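To see concretely why working with multibyte strings is error-prone,
consider finding the last backslash (path separator) in a string: in
Shift-JIS, 0x5C -- the backslash -- is a legal @emph{second} byte of a
two-byte character.  Here is a hedged sketch of the kind of
code-page-aware scanning this forces on you, using the genuine Win32
call IsDBCSLeadByteEx():

@example
/* Find the last path separator in a multibyte string.  A naive
   strrchr (path, '\\') can return a pointer into the *middle* of a
   two-byte character (e.g. in Shift-JIS, where 0x5C is a valid trail
   byte), so the scan must go forward from the start, skipping trail
   bytes explicitly. */
#include <windows.h>

static const char *
mbcs_find_last_backslash (const char *path, UINT codepage)
@{
  const char *p = path;
  const char *last = NULL;

  while (*p)
    @{
      if (IsDBCSLeadByteEx (codepage, (BYTE) *p) && p[1])
        p += 2;                 /* lead byte: skip the trail byte too */
      else
        @{
          if (*p == '\\')
            last = p;
          p++;
        @}
    @}
  return last;
@}
@end example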
The Unicode encoding rectifies all of these deficiencies, but it is not compatible with ASCII strings (in other words, an existing program will not be able to handle the encoded strings unless it is explicitly modified to do so), and it takes up twice as much memory space as multibyte encodings when encoding a purely ASCII string. Multibyte encodings use a variable number of bytes (either one or two) to represent characters. ASCII characters are also represented by a single byte with its high bit not set, and non-ASCII characters are represented by one or two bytes, the first of which always has its high bit set. (The second byte, when it exists, may or may not have its high bit set.) There is no single multibyte encoding. Instead, there is generally one encoding per non-ASCII character set. Such an encoding is capable of representing (besides ASCII characters, of course) only characters from one (or possibly two) particular character sets. Multibyte encoding makes processing of strings very difficult. For example, given a pointer to the beginning of a character within a string, finding the pointer to the beginning of the previous character may require backing up all the way to the beginning of the string, and then moving forward. Also, an operation such as separating out the components of a path by searching for backslashes will fail if it's implemented in the simplest (but not multibyte-aware) fashion, because it may find what appears to be a backslash, but which is actually the second byte of a two-byte character. Also, the limited number of character sets that any particular multibyte encoding can represent means that loss of data is likely if a string is converted from the XEmacs internal format into a multibyte format. For these reasons, the C code in XEmacs should never do any sort of work with multibyte encoded strings (or with strings in any external encoding for that matter). Strings should always be maintained in the internal encoding, which is predictable, and converted to an external encoding only at the point where the string moves from the XEmacs C code and enters a system library function. Similarly, when a string is returned from a system library function, it should be immediately converted into the internal coding before any operations are done on it. Unicode, unlike multibyte encodings, is a fixed-width encoding where every character is represented using 16 bits. It is also capable of encoding all the characters from all the character sets in common use in the world. The predictability and completeness of the Unicode encoding makes it a very good encoding for strings that may contain characters from many character sets mixed up with each other. At the same time, of course, it is incompatible with routines that expect ASCII characters and also incompatible with general string manipulation routines, which will encounter a great number of what would appear to be embedded nulls in the string. It also takes twice as much room to encode strings containing primarily ASCII characters. This is why XEmacs does not use Unicode or similar encoding internally for buffers. The Win32 API cleverly deals with the issue of 8 bit vs. 16 bit characters by declaring a type called TCHAR which specifies a generic character, either 8 bits or 16 bits. Generally TCHAR is defined to be the same as the simple C type char, unless the preprocessor constant UNICODE is defined, in which case TCHAR is defined to be WCHAR, which is a 16 bit type. 
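In code, the TCHAR mechanism looks roughly like this (simplified from
the declarations in @file{winnt.h} and @file{tchar.h}):

@example
#ifdef UNICODE
typedef WCHAR TCHAR;            /* 16-bit Unicode character */
#define __TEXT(quote) L##quote  /* string literals become wide */
#else
typedef char TCHAR;             /* 8-bit, possibly multibyte */
#define __TEXT(quote) quote
#endif
typedef TCHAR *LPTSTR;
typedef const TCHAR *LPCTSTR;
@end example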
Nearly all functions in the Win32 API that take strings are defined to take strings that are actually arrays of TCHARs. There is a type LPTSTR which is defined to be a string of TCHARs and another type LPCTSTR which is a const string of TCHARs. The theory is that any program that uses TCHARs exclusively to represent characters and does not make assumptions about the size of a TCHAR or the way that the characters are encoded should work transparently regardless of whether the UNICODE preprocessor constant is defined, which is to say, regardless of whether 8 bit multibyte or 16 bit Unicode characters are being used. The way that this is actually implemented is that every Win32 API function that takes a string as an argument actually maps to one of two functions which are suffixed with an A (which stands for ANSI, and means multibyte strings) or W (which stands for wide, and means Unicode strings). The mapping is, of course, controlled by the same UNICODE preprocessor constant. Generally all structures containing strings in them actually map to one of two different kinds of structures, with either an A or a W suffix after the structure name. Unfortunately, not all of the implementations of the Win32 API implement all of the functionality described above. In particular, Windows 95 does not implement very much Unicode functionality. It does implement functions to convert multibyte-encoded strings to and from Unicode strings, and provides Unicode versions of certain low-level functions like ExtTextOut(). In fact, all of the rest of the Unicode versions of API functions are just stubs that return an error. Conversely, all versions of Windows NT completely implement all the Unicode functionality, but some versions (especially versions before Windows NT 4.0) don't implement much of the multibyte functionality. For this reason, as well as for general code cleanliness, XEmacs needs to be written in such a way that it works with or without the UNICODE preprocessor constant being defined. Getting XEmacs to run when all strings are Unicode primarily involves removing any assumptions made about the size of characters. Remember what I said earlier about how the point of conversion between internally and externally encoded strings should occur at the point of entry or exit into or out of a library function. With this in mind, an externally encoded string in XEmacs can be treated simply as an arbitrary sequence of bytes of some length which has no particular relationship to the length of the string in the internal encoding. Use Qnative for Unix conversion, Qmswindows_tstr for Windows ... String constants that are to be passed directly to Win32 API functions, such as the names of window classes, need to be bracketed in their definition with a call to the macro XETEXT. This appropriately makes a string of either regular or wide chars, which is to say this string may be prepended with an L (causing it to be a wide string) depending on XEUNICODE_P. @node Modules for Internationalization, , Microsoft Windows-Related Multilingual Issues, Multilingual Support @section Modules for Internationalization @cindex modules for internationalization @cindex internationalization, modules for @example @file{mule-canna.c} @file{mule-ccl.c} @file{mule-charset.c} @file{mule-charset.h} @file{file-coding.c} @file{file-coding.h} @file{mule-coding.c} @file{mule-mcpath.c} @file{mule-mcpath.h} @file{mule-wnnfns.c} @file{mule.c} @end example These files implement the MULE (Asian-language) support. 
Note that MULE actually provides a general interface for all sorts of languages, not just Asian languages (although they are generally the most complicated to support). This code is still in beta. @file{mule-charset.*} and @file{file-coding.*} provide the heart of the XEmacs MULE support. @file{mule-charset.*} implements the @dfn{charset} Lisp object type, which encapsulates a character set (an ordered one- or two-dimensional set of characters, such as US ASCII or JISX0208 Japanese Kanji). @file{file-coding.*} implements the @dfn{coding-system} Lisp object type, which encapsulates a method of converting between different encodings. An encoding is a representation of a stream of characters, possibly from multiple character sets, using a stream of bytes or words, and defines (e.g.) which escape sequences are used to specify particular character sets, how the indices for a character are converted into bytes (sometimes this involves setting the high bit; sometimes complicated rearranging of the values takes place, as in the Shift-JIS encoding), etc. It also contains some generic coding system implementations, such as the binary (no-conversion) coding system and a sample gzip coding system. @file{mule-coding.c} contains the implementations of text coding systems. @file{mule-ccl.c} provides the CCL (Code Conversion Language) interpreter. CCL is similar in spirit to Lisp byte code and is used to implement converters for custom encodings. @file{mule-canna.c} and @file{mule-wnnfns.c} implement interfaces to external programs used to implement the Canna and WNN input methods, respectively. This is currently in beta. @file{mule-mcpath.c} provides some functions to allow for pathnames containing extended characters. This code is fragmentary, obsolete, and completely non-working. Instead, @code{pathname-coding-system} is used to specify conversions of names of files and directories. The standard C I/O functions like @samp{open()} are wrapped so that conversion occurs automatically. @file{mule.c} contains a few miscellaneous things. It currently seems to be unused and probably should be removed. @example @file{intl.c} @end example This provides some miscellaneous internationalization code for implementing message translation and interfacing to the Ximp input method. None of this code is currently working. @example @file{iso-wide.h} @end example This contains leftover code from an earlier implementation of Asian-language support, and is not currently used. @node Consoles; Devices; Frames; Windows, The Redisplay Mechanism, Multilingual Support, Top @chapter Consoles; Devices; Frames; Windows @cindex consoles; devices; frames; windows @cindex devices; frames; windows, consoles; @cindex frames; windows, consoles; devices; @cindex windows, consoles; devices; frames; @menu * Introduction to Consoles; Devices; Frames; Windows:: * Point:: * Window Hierarchy:: * The Window Object:: * Modules for the Basic Displayable Lisp Objects:: @end menu @node Introduction to Consoles; Devices; Frames; Windows, Point, Consoles; Devices; Frames; Windows, Consoles; Devices; Frames; Windows @section Introduction to Consoles; Devices; Frames; Windows @cindex consoles; devices; frames; windows, introduction to @cindex devices; frames; windows, introduction to consoles; @cindex frames; windows, introduction to consoles; devices; @cindex windows, introduction to consoles; devices; frames; A window-system window that you see on the screen is called a @dfn{frame} in Emacs terminology. 
Each frame is subdivided into one or more non-overlapping panes,
called (confusingly) @dfn{windows}.  Each window displays the text of
a buffer in it.  (See above on Buffers.)  Note that buffers and
windows are independent entities: Two or more windows can be
displaying the same buffer (potentially in different locations), and a
buffer can be displayed in no windows.

A single display screen that contains one or more frames is called a
@dfn{display}.  Under most circumstances, there is only one display.
However, more than one display can exist, for example if you have a
@dfn{multi-headed} console, i.e. one with a single keyboard but
multiple displays.  (Typically in such a situation, the various
displays act like one large display, in that the mouse is only in one
of them at a time, and moving the mouse off of one moves it into
another.)  In some cases, the different displays will have different
characteristics, e.g. one color and one mono.

XEmacs can display frames on multiple displays.  It can even deal
simultaneously with frames on multiple keyboards (called
@dfn{consoles} in XEmacs terminology).  Here is one case where this
might be useful: You are using XEmacs on your workstation at work, and
leave it running.  Then you go home and dial in on a TTY line, and you
can use the already-running XEmacs process to display another frame on
your local TTY.

Thus, there is a hierarchy console -> display -> frame -> window.
There is a separate Lisp object type for each of these four concepts.
Furthermore, there is logically a @dfn{selected console},
@dfn{selected display}, @dfn{selected frame}, and @dfn{selected
window}.  Each of these objects is distinguished in various ways, such
as being the default object for various functions that act on objects
of that type.  Note that every containing object remembers the
``selected'' object among the objects that it contains: e.g. not only
is there a selected window, but every frame remembers the last window
in it that was selected, and changing the selected frame causes the
remembered window within it to become the selected window.  Similar
relationships apply for consoles to devices and devices to frames.

@node Point, Window Hierarchy, Introduction to Consoles; Devices; Frames; Windows, Consoles; Devices; Frames; Windows
@section Point
@cindex point

Recall that every buffer has a current insertion position, called
@dfn{point}.  Now, two or more windows may be displaying the same
buffer, and the text cursor in the two windows (i.e. @code{point}) can
be in two different places.  You may ask, how can that be, since each
buffer has only one value of @code{point}?  The answer is that each
window also has a value of @code{point} that is squirreled away in it.
There is only one selected window, and the buffer's own value of
@code{point} corresponds to that window.  When the selected window is
changed from one window to another displaying the same buffer, the old
value of @code{point} is stored into the old window's ``point'' and
the value of ``point'' from the new window is retrieved and made the
value of @code{point} in the buffer.  This means that
@code{window-point} for the selected window is potentially inaccurate,
and if you want to retrieve the correct value of @code{point} for a
window, you must special-case on the selected window and retrieve the
buffer's point instead.  This is related to why
@code{save-window-excursion} does not save the selected window's value
of @code{point}.
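A hedged sketch of the swap just described (the real logic lives in
the window-selection code in @file{window.c}; the names here are
illustrative, and the real window structure actually keeps several
versions of these fields):

@example
/* Illustrative only -- not the literal XEmacs code. */
static void
sketch_set_selected_window (struct window *old_w, struct window *new_w)
@{
  struct buffer *oldbuf = XBUFFER (old_w->buffer);

  /* Stash the buffer's point into the window we are leaving ... */
  Fset_marker (old_w->pointm, make_int (BUF_PT (oldbuf)),
               old_w->buffer);

  /* ... and make the new window's stashed point current in the
     buffer it displays. */
  BUF_SET_PT (XBUFFER (new_w->buffer),
              marker_position (new_w->pointm));
@}
@end example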
@node Window Hierarchy, The Window Object, Point, Consoles; Devices; Frames; Windows
@section Window Hierarchy
@cindex window hierarchy
@cindex hierarchy of windows

If a frame contains multiple windows (panes), they are always created
by splitting an existing window along the horizontal or vertical axis.
Terminology is a bit confusing here: to @dfn{split a window
horizontally} means to create two side-by-side windows, i.e. to make a
@emph{vertical} cut in a window.  Likewise, to @dfn{split a window
vertically} means to create two windows, one above the other, by
making a @emph{horizontal} cut.

If you split a window and then split again along the same axis, you
will end up with a number of panes all arranged along the same axis.
The precise way in which the splits were made should not be important,
and this is reflected internally.  Internally, all windows are
arranged in a tree, consisting of two types of windows,
@dfn{combination} windows (which have children, and are covered
completely by those children) and @dfn{leaf} windows, which have no
children and are visible.  Every combination window has two or more
children, all arranged along the same axis.  There are (logically) two
subtypes of windows, depending on whether their children are
horizontally or vertically arrayed.  There is always one root window,
which is either a leaf window (if the frame contains only one window)
or a combination window (if the frame contains more than one window).
In the latter case, the root window will have two or more children,
either horizontally or vertically arrayed, and each of those children
will be either a leaf window or another combination window.

Here are some rules:

@enumerate
@item
Horizontal combination windows can never have children that are
horizontal combination windows; same for vertical.

@item
Only leaf windows can be split (obviously) and this splitting does one
of two things: (a) turns the leaf window into a combination window and
creates two new leaf children, or (b) turns the leaf window into one
of the two new leaves and creates the other leaf.  Rule (1) dictates
which of these two outcomes happens.

@item
Every combination window must have at least two children.

@item
Leaf windows can never become combination windows.  They can be
deleted, however.  If this results in a violation of (3), the parent
combination window also gets deleted.

@item
All functions that accept windows must be prepared to accept
combination windows, and do something sane with them (e.g. signal an
error if handed one).  Combination windows @emph{do} escape to the
Lisp level.

@item
All windows have three fields governing their contents: these are
@dfn{hchild} (a list of horizontally-arrayed children), @dfn{vchild}
(a list of vertically-arrayed children), and @dfn{buffer} (the buffer
contained in a leaf window).  Exactly one of these will be
non-@code{nil}.  Remember that @dfn{horizontally-arrayed} means
``side-by-side'' and @dfn{vertically-arrayed} means ``one above the
other''.

@item
Leaf windows also have markers in their @code{start} (the first buffer
position displayed in the window) and @code{pointm} (the window's
stashed value of @code{point}---see above) fields, while combination
windows have @code{nil} in these fields.

@item
The list of children for a window is threaded through the @code{next}
and @code{prev} fields of each child window.

@item
@strong{Deleted windows can be undeleted}.
This happens as a result of restoring a window configuration, and is unlike frames, displays, and consoles, which, once deleted, can never be restored. Deleting a window does nothing except set a special @code{dead} bit to 1 and clear out the @code{next}, @code{prev}, @code{hchild}, and @code{vchild} fields, for GC purposes. @item Most frames actually have two top-level windows---one for the minibuffer and one (the @dfn{root}) for everything else. The modeline (if present) separates these two. The @code{next} field of the root points to the minibuffer, and the @code{prev} field of the minibuffer points to the root. The other @code{next} and @code{prev} fields are @code{nil}, and the frame points to both of these windows. Minibuffer-less frames have no minibuffer window, and the @code{next} and @code{prev} of the root window are @code{nil}. Minibuffer-only frames have no root window, and the @code{next} of the minibuffer window is @code{nil} but the @code{prev} points to itself. (#### This is an artifact that should be fixed.) @end enumerate @node The Window Object, Modules for the Basic Displayable Lisp Objects, Window Hierarchy, Consoles; Devices; Frames; Windows @section The Window Object @cindex window object, the @cindex object, the window Windows have the following accessible fields: @table @code @item frame The frame that this window is on. @item mini_p Non-@code{nil} if this window is a minibuffer window. @item buffer The buffer that the window is displaying. This may change often during the life of the window. @item dedicated Non-@code{nil} if this window is dedicated to its buffer. @item pointm @cindex window point internals This is the value of point in the current buffer when this window is selected; when it is not selected, it retains its previous value. @item start The position in the buffer that is the first character to be displayed in the window. @item force_start If this flag is non-@code{nil}, it says that the window has been scrolled explicitly by the Lisp program. This affects what the next redisplay does if point is off the screen: instead of scrolling the window to show the text around point, it moves point to a location that is on the screen. @item last_modified The @code{modified} field of the window's buffer, as of the last time a redisplay completed in this window. @item last_point The buffer's value of point, as of the last time a redisplay completed in this window. @item left This is the left-hand edge of the window, measured in columns. (The leftmost column on the screen is @w{column 0}.) @item top This is the top edge of the window, measured in lines. (The top line on the screen is @w{line 0}.) @item height The height of the window, measured in lines. @item width The width of the window, measured in columns. @item next This is the window that is the next in the chain of siblings. It is @code{nil} in a window that is the rightmost or bottommost of a group of siblings. @item prev This is the window that is the previous in the chain of siblings. It is @code{nil} in a window that is the leftmost or topmost of a group of siblings. @item parent Internally, XEmacs arranges windows in a tree; each group of siblings has a parent window whose area includes all the siblings. This field points to a window's parent. Parent windows do not display buffers, and play little role in display except to shape their child windows. Emacs Lisp programs usually have no access to the parent windows; they operate on the windows at the leaves of the tree, which actually display buffers. 
@item hscroll
This is the number of columns that the display in the window is
scrolled horizontally to the left.  Normally, this is 0.

@item use_time
This is the last time that the window was selected.  The function
@code{get-lru-window} uses this field.

@item display_table
The window's display table, or @code{nil} if none is specified for it.

@item update_mode_line
Non-@code{nil} means this window's mode line needs to be updated.

@item base_line_number
The line number of a certain position in the buffer, or @code{nil}.
This is used for displaying the line number of point in the mode line.

@item base_line_pos
The position in the buffer for which the line number is known, or
@code{nil} meaning none is known.

@item region_showing
If the region (or part of it) is highlighted in this window, this
field holds the mark position that made one end of that region.
Otherwise, this field is @code{nil}.
@end table

@node Modules for the Basic Displayable Lisp Objects,  , The Window Object, Consoles; Devices; Frames; Windows
@section Modules for the Basic Displayable Lisp Objects
@cindex modules for the basic displayable Lisp objects
@cindex displayable Lisp objects, modules for the basic
@cindex Lisp objects, modules for the basic displayable
@cindex objects, modules for the basic displayable Lisp

@example
@file{console-msw.c}
@file{console-msw.h}
@file{console-stream.c}
@file{console-stream.h}
@file{console-tty.c}
@file{console-tty.h}
@file{console-x.c}
@file{console-x.h}
@file{console.c}
@file{console.h}
@end example

These modules implement the @dfn{console} Lisp object type.  A console
contains multiple display devices, but only one keyboard and mouse.
Most of the time, a console will contain exactly one device.

Consoles are the top of a Lisp object inclusion hierarchy.  Consoles
contain devices, which contain frames, which contain windows.

@example
@file{device-msw.c}
@file{device-tty.c}
@file{device-x.c}
@file{device.c}
@file{device.h}
@end example

These modules implement the @dfn{device} Lisp object type.  This
abstracts a particular screen or connection on which frames are
displayed.  As with Lisp objects, event interfaces, and other
subsystems, the device code is separated into a generic component,
containing a standardized interface (in the form of a set of methods),
and device-type-specific implementations of those methods.  The device
subsystem defines all the methods and provides method services for not
only device operations but also for the frame, window, menubar,
scrollbar, toolbar, and other displayable-object subsystems.  The
reason for this is that all of these subsystems have the same subtypes
(X, TTY, NeXTstep, Microsoft Windows, etc.) as devices do.

@example
@file{frame-msw.c}
@file{frame-tty.c}
@file{frame-x.c}
@file{frame.c}
@file{frame.h}
@end example

Each device contains one or more frames in which objects (e.g. text)
are displayed.  A frame corresponds to a window in the window system;
usually this is a top-level window but it could potentially be one of
a number of overlapping child windows within a top-level window, using
the MDI (Multiple Document Interface) protocol in Microsoft Windows or
a similar scheme.  The @file{frame-*} files implement the @dfn{frame}
Lisp object type and provide the generic and device-type-specific
operations on frames (e.g. raising, lowering, resizing, moving, etc.).
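The generic-versus-specific split described above is implemented in
the usual C way, as a table of function pointers per console type.
The following is an illustrative sketch of the style only -- the real
tables and dispatch macros live in headers such as
@file{console-impl.h}, and the names here are made up:

@example
/* Illustrative sketch of the method-table organization.  Each
   console type (X, TTY, mswindows, ...) fills in one of these. */
struct sketch_console_methods
@{
  const char *name;
  void (*init_device_method) (struct device *, Lisp_Object props);
  void (*delete_device_method) (struct device *);
  void (*init_frame_method) (struct frame *);
  /* ... many more, covering frames, redisplay, scrollbars, etc. */
@};

/* Generic code dispatches through the table, calling a method only
   if the console type actually provides it: */
#define SKETCH_MAYBE_METH(meths, m, args)  \
do @{                                       \
  if ((meths)->m##_method)                 \
    (meths)->m##_method args;              \
@} while (0)
@end example

A TTY device, for instance, simply leaves empty the methods that make
no sense for it, and generic code never needs to know which device
type it is talking to.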
@example
@file{window.c}
@file{window.h}
@end example

@cindex window (in Emacs)
@cindex pane

Each frame consists of one or more non-overlapping @dfn{windows}
(better known as @dfn{panes} in standard window-system terminology) in
which a buffer's text can be displayed.  Windows can also have
scrollbars displayed around their edges.

@file{window.c} and @file{window.h} implement the @dfn{window} Lisp
object type and provide code to manage windows.  Since windows have no
associated resources in the window system (the window system knows
only about the frame; no child windows or anything are used for XEmacs
windows), there is no device-type-specific code here; all of that code
is part of the redisplay mechanism or the code for particular object
types such as scrollbars.

@node The Redisplay Mechanism, Extents, Consoles; Devices; Frames; Windows, Top
@chapter The Redisplay Mechanism
@cindex redisplay mechanism, the

The redisplay mechanism is one of the most complicated sections of
XEmacs, especially from a conceptual standpoint.  This is doubly so
because, unlike for the basic aspects of the Lisp interpreter, the
computer science theories of how to efficiently handle redisplay are
not well-developed.

When working with the redisplay mechanism, remember the Golden Rules
of Redisplay:

@enumerate
@item
It Is Better To Be Correct Than Fast.

@item
Thou Shalt Not Run Elisp From Within Redisplay.

@item
It Is Better To Be Fast Than Not To Be.
@end enumerate

@menu
* Critical Redisplay Sections::
* Line Start Cache::
* Redisplay Piece by Piece::
* Modules for the Redisplay Mechanism::
* Modules for other Display-Related Lisp Objects::
@end menu

@node Critical Redisplay Sections, Line Start Cache, The Redisplay Mechanism, The Redisplay Mechanism
@section Critical Redisplay Sections
@cindex redisplay sections, critical
@cindex critical redisplay sections

@strong{The following paragraphs are way out-of-date and inaccurate.}

Within this section, we are defenseless and assume that the following
cannot happen:

@enumerate
@item
garbage collection

@item
Lisp code evaluation

@item
frame size changes
@end enumerate

We ensure (3) by calling @code{hold_frame_size_changes()}, which will
cause any pending frame size changes to get put on hold till after the
end of the critical section.  (1) follows automatically if (2) is met.
#### Unfortunately, there are some places where Lisp code can be
called within this section.  We need to remove them.

If @code{Fsignal()} is called during this critical section, we will
@code{abort()}.

If garbage collection is called during this critical section, we
simply return.  #### We should abort instead.

#### If a frame-size change does occur we should probably actually be
preempting redisplay.

@strong{Begin up-to-date stuff}

@subsection Nasty Bugs due to Reentrancy in Redisplay Structures handling QUIT

These are now fixed as of November 10, 2004.

@subheading Crash -- reentrant @code{regenerate_window()}

Here is a crash I (ben) just got -- November 9, 2004:

It can sort of be reproduced by creating a bunch of frames, opening a
bunch of large files (which may be fontlocking for a while), and then
immediately Alt-TAB-ing back and forth quickly and constantly
scrolling up and down using the scrolling dial on your mouse.
@example
Fatal error: assertion failed, file c:\xemacs\build\src\redisplay.c, line 5532,
!dy->locked

C backtrace:

assert_failed(const char * 0x012a4ff0 `string', int 5532, const char * 0x0127bea4 `string') line 3839
Dynarr_verify_mod_1(void * 0x023ad2b0, const char * 0x012a4ff0 `string', int 5532) line 1306 + 36 bytes
regenerate_window(window * 0x02f2ca88, long 40372, long 40372, int 2) line 5532 + 25 bytes
update_line_start_cache(window * 0x02f2ca88, long 40372, long 40401, long 40372, int 0) line 8543 + 19 bytes
point_in_line_start_cache(window * 0x02f2ca88, long 40372, int 0) line 7850 + 23 bytes
start_end_of_last_line(window * 0x02f2ca88, long 40372, int 1, int 1) line 8121 + 15 bytes
end_of_last_line_may_error(window * 0x02f2ca88, long 40372) line 8203 + 17 bytes
pixel_to_glyph_translation(frame * 0x02f2c900, int 291, int 317, int * 0x0082bb04, int * 0x0082bb00, int * 0x0082bafc, int * 0x0082baf8, window * * 0x0082bae8, long * 0x0082baf4, long * 0x0082baf0, long * 0x0082baec, long * 0x0082bb10, long * 0x0082bb0c) line 9336 + 32 bytes
mswindows_handle_mousewheel_event(long 49465600, int 0, int -240, tagPOINTS @{...@}) line 360 + 82 bytes
mswindows_wnd_proc(HWND__ * 0x00260a42, unsigned int 522, unsigned int 4279238656, long 29885130) line 3561 + 36 bytes
intercepted_wnd_proc(HWND__ * 0x00260a42, unsigned int 522, unsigned int 4279238656, long 29885130) line 2376
USER32! 77e11ef0()
USER32! 77e1204c()
USER32! 77e121af()
mswindows_drain_windows_queue(int 0) line 1330 + 9 bytes
emacs_mswindows_drain_queue() line 1339 + 7 bytes
event_stream_drain_queue() line 1785
event_stream_quit_p() line 1893
check_quit() line 938
check_what_happened() line 459
internal_equal(long 22180468, long 22180468, int 0) line 2823 + 14 bytes
update_image_instance(long 83498640, long 22180468) line 2121 + 18 bytes
image_instantiate(long 21418616, long 20663624, long 54932896, long 22180468, long 3) line 3403 + 13 bytes
va_call_trapping_problems_1(void * 0x0082cf94) line 5220 + 221 bytes
call_trapping_problems_2(long 83160440) line 4867 + 13 bytes
call_with_condition_handler(long (long, long, long)* 0x010cc4c0 flagged_a_squirmer(long, long, long), long 83160440, long (long)* 0x010cc440 call_trapping_problems_2(long), long 83160440) line 2129 + 7 bytes
call_trapping_problems_1(long 83160440) line 4874 + 23 bytes
internal_catch(long 21399864, long (long)* 0x010cc490 call_trapping_problems_1(long), long 83160440, int * volatile 0x0082ce4c, long * volatile 0x0082ce54) line 1527 + 7 bytes
call_trapping_problems(long 20908160, const char * 0x00000000, int 98315, call_trapping_problems_result * 0x00000000, long (void *)* 0x010cca30 va_call_trapping_problems_1(void *), void * 0x0082cf94) line 5147 + 32 bytes
call_with_suspended_errors(long (void)* 0x011448c0 image_instantiate(long, long, long, long, long), long 20663624, long 20908160, _error_behavior_struct_ @{...@}, int 5) line 5314 + 26 bytes
specifier_instance_from_inst_list(long 21418616, long 20663624, long 54932896, long 21673760, _error_behavior_struct_ @{...@}, int 1, long 3) line 2501 + 54 bytes
specifier_instance(long 21418616, long 20663624, long 54932896, _error_behavior_struct_ @{...@}, int 1, int 0, long 3) line 2614 + 64 bytes
glyph_image_instance(long 22692176, long 54932896, _error_behavior_struct_ @{...@}, int 1) line 3955 + 31 bytes
add_glyph_rune(position_redisplay_data_type * 0x0082d52c, glyph_block * 0x0082d454, int 0, int 0, glyph_cachel * 0x04f4e518) line 1972 + 26 bytes
create_text_block(window * 0x034635a0, display_line * 0x033bfb28, long 29860, prop_block_dynarr * * 0x0082d7b8, int 2) line 2827 + 30 bytes
generate_display_line(window * 0x034635a0, display_line * 0x033bfb28, int 1, long 29860, prop_block_dynarr * * 0x0082d7b8, int 2) line 979 + 38 bytes
regenerate_window(window * 0x034635a0, long 29860, long 25012, int 2) line 5607 + 30 bytes
update_line_start_cache(window * 0x034635a0, long 25012, long 28767, long 25012, int 0) line 8614 + 19 bytes
point_in_line_start_cache(window * 0x034635a0, long 25012, int 0) line 7850 + 23 bytes
start_end_of_last_line(window * 0x034635a0, long 25012, int 1, int 0) line 8121 + 15 bytes
end_of_last_line(window * 0x034635a0, long 25012) line 8197 + 17 bytes
Fwindow_end(long 54932896, long 20926544) line 1848 + 13 bytes
Ffuncall(int 3, long * 0x0082dbb8) line 3841 + 93 bytes
execute_optimized_program(const unsigned char * 0x032ceee8, int 7, long * 0x03289f40) line 823 + 16 bytes
funcall_compiled_function(long 52991916, int 1, long * 0x0082dfb0) line 3454 + 85 bytes
Ffuncall(int 2, long * 0x0082dfac) line 3880 + 17 bytes
execute_optimized_program(const unsigned char * 0x02f667d8, int 6, long * 0x01558748) line 823 + 16 bytes
funcall_compiled_function(long 22579576, int 3, long * 0x0082e3ac) line 3454 + 85 bytes
Ffuncall(int 4, long * 0x0082e3a8) line 3880 + 17 bytes
execute_optimized_program(const unsigned char * 0x03209c98, int 5, long * 0x03288c68) line 823 + 16 bytes
funcall_compiled_function(long 51656320, int 1, long * 0x0082e7a4) line 3454 + 85 bytes
Ffuncall(int 2, long * 0x0082e7a0) line 3880 + 17 bytes
execute_optimized_program(const unsigned char * 0x0082e9ec, int 4, long * 0x03224990) line 823 + 16 bytes
Fbyte_code(long 37927380, long 52578688, long 9) line 2564 + 70 bytes
Feval(long 51505420) line 3601 + 187 bytes
internal_catch(long 51959412, long (long)* 0x010c6f40 Feval(long), long 51505420, int * volatile 0x00000000, long * volatile 0x00000000) line 1527 + 7 bytes
execute_rare_opcode(long * 0x0082eee8, const unsigned char * 0x03248365, Opcode Bcatch) line 1380 + 24 bytes
execute_optimized_program(const unsigned char * 0x03248340, int 2, long * 0x02f3c0a0) line 715 + 17 bytes
funcall_compiled_function(long 51656276, int 0, long * 0x0082f444) line 3454 + 85 bytes
Ffuncall(int 1, long * 0x0082f440) line 3880 + 17 bytes
run_hook_with_args_in_buffer(buffer * 0x04ee9060, int 1, long * 0x0082f440, run_hooks_condition RUN_HOOKS_TO_COMPLETION) line 4361 + 13 bytes
run_hook_with_args(int 1, long * 0x0082f440, run_hooks_condition RUN_HOOKS_TO_COMPLETION) line 4374 + 23 bytes
run_hook(long 51959028) line 4443 + 13 bytes
safe_run_hook_trapping_problems_1(void * 0x013c73c0) line 5517 + 9 bytes
call_trapping_problems_2(long 83157920) line 4867 + 13 bytes
call_with_condition_handler(long (long, long, long)* 0x010cc4c0 flagged_a_squirmer(long, long, long), long 83157920, long (long)* 0x010cc440 call_trapping_problems_2(long), long 83157920) line 2129 + 7 bytes
call_trapping_problems_1(long 83157920) line 4874 + 23 bytes
internal_catch(long 21399864, long (long)* 0x010cc490 call_trapping_problems_1(long), long 83157920, int * volatile 0x0082f700, long * volatile 0x0082f708) line 1527 + 7 bytes
call_trapping_problems(long 20925944, const char * 0x00000000, int 131235, call_trapping_problems_result * 0x0082f830, long (void *)* 0x010cd990 safe_run_hook_trapping_problems_1(void *), void * 0x013c73c0) line 5147 + 32 bytes
safe_run_hook_trapping_problems(long 20741312, long 20739008, int 160) line 5543 + 36 bytes
run_pre_idle_hook() line 2084 + 24 bytes
redisplay() line 7224
Fnext_event(long 37363732, long 20928056) line 2263
Fcommand_loop_1() line 600 + 15 bytes
command_loop_1(long 20928056) line 512
condition_case_1(long 20925944, long (long)* 0x01096a80 command_loop_1(long), long 20928056, long (long, long)* 0x01096630 cmd_error(long, long), long 20928056) line 1918 + 7 bytes
command_loop_3() line 262 + 35 bytes
command_loop_2(long 20928056) line 277
internal_catch(long 20683712, long (long)* 0x010967a0 command_loop_2(long), long 20928056, int * volatile 0x00000000, long * volatile 0x00000000) line 1527 + 7 bytes
initial_command_loop(long 20928056) line 313 + 28 bytes
xemacs_21_5_b18_i586_pc_win32(int 1, unsigned short * * 0x0082fed0, unsigned short * * 0x00000000, int 0) line 2551
main(int 1, char * * 0x00e52610, char * * 0x00e52bb0) line 2992
mainCRTStartup() line 338 + 17 bytes
KERNEL32! 7c59893d()

Lisp backtrace:

# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (catch #<INTERNAL OBJECT (XEmacs bug?) (opaque, size=0) 0x1468938> ...)
# (unwind-protect ...)
# bind (inhibit-quit)
window-end(#<window on "signal.c<2>" 0x5e4a> t)
# (unwind-protect ...)
# bind (buffer we-are-screwed check-text-props window)
lazy-lock-fontify-window(#<window on "signal.c<2>" 0x5e4a>)
# bind (walk-windows-current walk-windows-start arg which-devices which-frames minibuf function)
walk-windows(lazy-lock-fontify-window no-minibuf #<mswindows-frame "emacs" 0x5e49>)
# (unwind-protect ...)
# bind (ssf65112 tick frame)
lazy-lock-maybe-fontify-frame(#<mswindows-frame "emacs" 0x5e49>)
# bind (frame starting-frame)
byte-code("..." [starting-frame frame selected-frame frame-visible-p frame-minibuffer-only-p next-frame visible-nomini throw lazy-lock-frame-loop-done t lazy-lock-maybe-fontify-frame] 4)
# (catch lazy-lock-frame-loop-done ...)
lazy-lock-pre-idle-fontify-windows()
# (unwind-protect ...)
# (catch #<INTERNAL OBJECT (XEmacs bug?) (opaque, size=0) 0x1468938> ...)
# (unwind-protect ...)
# (unwind-protect ...)
# bind (inhibit-quit)
# (unwind-protect ...)
# (unwind-protect ...)
# bind (inhibit-quit)
(next-event "[internal]")
# (condition-case ... . error)
# (catch top-level ...)
@end example

@subsubheading Another Lisp trace of a similar situation (C stack trace not available):

@example
Fatal error: assertion failed, file c:\xemacs\build\src\redisplay.c, line 5532,
!dy->locked

Lisp backtrace follows:

# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (unwind-protect ...)
scrollbar-page-down((#<window on "*grep*" 0x1a5f9>))
(dispatch-event "[internal]")
# (unwind-protect ...)
# (catch #<INTERNAL OBJECT (XEmacs bug?) (opaque, size=0) 0x1468270> ...)
# (unwind-protect ...)
# (unwind-protect ...)
# (catch #<INTERNAL OBJECT (XEmacs bug?) (opaque, size=0) 0x1468270> ...)
# (unwind-protect ...)
# bind (inhibit-quit)
window-end(#<window on "*grep*" 0x1a5f9> t)
# (unwind-protect ...)
# bind (buffer we-are-screwed check-text-props window)
lazy-lock-fontify-window(#<window on "*grep*" 0x1a5f9>)
# bind (walk-windows-current walk-windows-start arg which-devices which-frames minibuf function)
walk-windows(lazy-lock-fontify-window no-minibuf #<mswindows-frame "emacs" 0x19f64>)
# (unwind-protect ...)
# bind (ssf65112 tick frame)
lazy-lock-maybe-fontify-frame(#<mswindows-frame "emacs" 0x19f64>)
# bind (frame starting-frame)
byte-code("..." [starting-frame frame selected-frame frame-visible-p frame-minibuffer-only-p next-frame visible-nomini throw lazy-lock-frame-loop-done t lazy-lock-maybe-fontify-frame] 4)
# (catch lazy-lock-frame-loop-done ...)
lazy-lock-pre-idle-fontify-windows()
# (unwind-protect ...)
# (catch #<INTERNAL OBJECT (XEmacs bug?) (opaque, size=0) 0x1468270> ...)
# (unwind-protect ...)
# (unwind-protect ...)
# bind (inhibit-quit)
# (unwind-protect ...)
# (unwind-protect ...)
# bind (inhibit-quit)
(next-event "[internal]")
# (condition-case ... . error)
# (catch top-level ...)
@end example

@subheading Crash -- reentrant @code{generate_displayable_area()}

Original code said [Tricky tricky tricky.
@code{generate_displayable_area()} can (could) be called reentrantly,
and redisplay is not prepared to handle this:].

assert_failed(const char * 0x0129c8c8 `string', int 5328, const char * 0x01274068 `string') line 3620
Dynarr_verify_mod_1(void * 0x0250f228, const char * 0x0129c8c8 `string', int 5328) line 1256 + 36 bytes
generate_displayable_area(window * 0x02480028, long 38776292, int 0, int 0, int 265, int 169, display_line_dynarr * 0x0250f228, long 0, int 2) line 5328 + 25 bytes
output_gutter(frame * 0x0228ad90, gutter_pos TOP_GUTTER, int 1) line 409 + 69 bytes
redraw_exposed_gutter(frame * 0x0228ad90, gutter_pos TOP_GUTTER, int 8, int 23, int 249, int 127) line 687 + 15 bytes
redraw_exposed_gutters(frame * 0x0228ad90, int 8, int 23, int 249, int 127) line 703 + 29 bytes
mswindows_redraw_exposed_area(frame * 0x0228ad90, int 8, int 23, int 249, int 127) line 862 + 25 bytes
mswindows_handle_paint(frame * 0x0228ad90) line 2176 + 25 bytes
mswindows_wnd_proc(HWND__ * 0x001003e2, unsigned int 15, unsigned int 0, long 0) line 3233 + 45 bytes
intercepted_wnd_proc(HWND__ * 0x001003e2, unsigned int 15, unsigned int 0, long 0) line 2488
USER32! 77e3a244()
USER32! 77e14730()
USER32! 77e1558a()
NTDLL! KiUserCallbackDispatcher@@12 + 19 bytes
USER32! 77e14680()
USER32!
77e1a792() qxeIsDialogMessage(HWND__ * 0x001003e2, tagMSG * 0x0082a93c @{msg=0x0000000f wp=0x00000000 lp=0x00000000@}) line 2298 + 14 bytes mswindows_is_dialog_msg(tagMSG * 0x0082a93c @{msg=0x0000000f wp=0x00000000 lp=0x00000000@}) line 165 + 13 bytes mswindows_drain_windows_queue(int 0) line 1282 + 9 bytes emacs_mswindows_drain_queue() line 1326 + 7 bytes event_stream_drain_queue() line 1887 event_stream_quit_p() line 1992 check_quit() line 993 unbind_to_hairy(int 35) line 5963 unbind_to_1(int 35, long 20888208) line 5945 + 200 bytes specifier_instance_from_inst_list(long 21379344, long 38135616, long 36220304, long 20888208, _error_behavior_struct_ @{...@}, int 1, long 3) line 2522 + 16 bytes specifier_instance(long 21379344, long 38135616, long 36220304, _error_behavior_struct_ @{...@}, int 1, int 0, long 3) line 2625 + 65 bytes specifier_instance_no_quit(long 21379344, long 38135616, long 36220304, _error_behavior_struct_ @{...@}, int 0, long 1) line 2658 + 31 bytes face_property_matching_instance(long 22612340, long 20860632, long 22530956, long 36220304, _error_behavior_struct_ @{...@}, int 0, long 1) line 565 + 48 bytes ensure_face_cachel_contains_charset(face_cachel * 0x0082b014, long 36220304, long 22530956) line 1104 + 35 bytes update_face_cachel_data(face_cachel * 0x0082b014, long 36220304, long 22612340) line 1304 + 19 bytes query_string_geometry(long 21110576, long 22612340, int * 0x00000000, int * 0x0082b5b4, int * 0x00000000, long 38852960) line 2370 + 23 bytes mswindows_widget_query_string_geometry(long 21110576, long 22612340, int * 0x0082b5b8, int * 0x0082b5b4, long 38852960) line 2914 + 25 bytes widget_query_string_geometry(long 21110576, long 22612340, int * 0x0082b5b8, int * 0x0082b5b4, long 38852960) line 514 + 32 bytes edit_field_query_geometry(long 38857648, int * 0x0082b7b4, int * 0x0082b7b8, image_instance_geometry IMAGE_DESIRED_GEOMETRY, long 38852960) line 920 + 390 bytes widget_query_geometry(long 38857648, int * 0x0082b7b4, int * 0x0082b7b8, image_instance_geometry IMAGE_DESIRED_GEOMETRY, long 38852960) line 567 + 26 bytes image_instance_query_geometry(long 38857648, int * 0x0082b7b4, int * 0x0082b7b8, image_instance_geometry IMAGE_DESIRED_GEOMETRY, long 38852960) line 2015 + 26 bytes glyph_query_geometry(long 38853384, int * 0x0082b7b4, int * 0x0082b7b8, image_instance_geometry IMAGE_DESIRED_GEOMETRY, long 38852960) line 4197 + 25 bytes layout_query_geometry(long 38852960, int * 0x0082b9cc, int * 0x0082b9d0, image_instance_geometry IMAGE_DESIRED_GEOMETRY, long 38404624) line 1351 + 25 bytes widget_query_geometry(long 38852960, int * 0x0082b9cc, int * 0x0082b9d0, image_instance_geometry IMAGE_DESIRED_GEOMETRY, long 38404624) line 567 + 26 bytes image_instance_query_geometry(long 38852960, int * 0x0082b9cc, int * 0x0082b9d0, image_instance_geometry IMAGE_DESIRED_GEOMETRY, long 38404624) line 2015 + 26 bytes glyph_query_geometry(long 38537976, int * 0x0082b9cc, int * 0x0082b9d0, image_instance_geometry IMAGE_DESIRED_GEOMETRY, long 38404624) line 4197 + 25 bytes layout_layout(long 38404624, int 265, int 156, int -2, int -2, long 38273064) line 1468 + 23 bytes widget_layout(long 38404624, int 265, int 156, int -2, int -2, long 38273064) line 626 + 30 bytes image_instance_layout(long 38404624, int 265, int 156, int -2, int -2, long 38273064) line 2102 + 51 bytes glyph_ascent(long 38404624, long 38273064) line 4009 + 21 bytes update_glyph_cachel_data(window * 0x02480028, long 36201168, glyph_cachel * 0x0248c3d8) line 4272 + 13 bytes get_glyph_cachel_index(window * 
0x02480028, long 36201168) line 4306 + 17 bytes add_glyph_rune(position_redisplay_data_type * 0x0082bf2c, glyph_block * 0x024bd028, int 0, int 0, glyph_cachel * 0x00000000) line 1800 + 15 bytes add_glyph_runes(position_redisplay_data_type * 0x0082bf2c, int 0) line 2085 + 31 bytes create_string_text_block(window * 0x02480028, long 38776292, display_line * 0x02514500, long 0, prop_block_dynarr * * 0x0082c13c, int 2) line 4907 + 14 bytes generate_string_display_line(window * 0x02480028, long 38776292, display_line * 0x02514500, long 0, prop_block_dynarr * * 0x0082c13c, int 2) line 5293 + 29 bytes generate_displayable_area(window * 0x02480028, long 38776292, int 0, int 0, int 265, int 169, display_line_dynarr * 0x0250f228, long 0, int 2) line 5372 + 29 bytes output_gutter(frame * 0x0228ad90, gutter_pos TOP_GUTTER, int 0) line 409 + 69 bytes update_frame_gutters(frame * 0x0228ad90) line 639 + 15 bytes redisplay_frame(frame * 0x0228ad90, int 1) line 6792 + 9 bytes redisplay_device(device * 0x0171df00, int 1) line 6911 + 11 bytes redisplay_without_hooks() line 6957 + 11 bytes redisplay_no_pre_idle_hook() line 7029 redisplay() line 7011 mswindows_wnd_proc(HWND__ * 0x001003e2, unsigned int 5, unsigned int 0, long 10223881) line 3424 intercepted_wnd_proc(HWND__ * 0x001003e2, unsigned int 5, unsigned int 0, long 10223881) line 2488 USER32! 77e3a244() USER32! 77e16362() USER32! 77e14c1a() USER32! 77e1dd30() mswindows_wnd_proc(HWND__ * 0x001003e2, unsigned int 71, unsigned int 0, long 8578308) line 3926 + 21 bytes intercepted_wnd_proc(HWND__ * 0x001003e2, unsigned int 71, unsigned int 0, long 8578308) line 2488 USER32! 77e3a244() USER32! 77e14730() USER32! 77e174b4() NTDLL! KiUserCallbackDispatcher@@12 + 19 bytes mswindows_set_frame_size(frame * 0x0228ad90, int 265, int 156) line 355 internal_set_frame_size(frame * 0x0228ad90, int 265, int 156, int 0) line 2754 + 24 bytes Fset_frame_displayable_pixel_size(long 36220304, long 531, long 313, long 20888208) line 3004 + 32 bytes Ffuncall(int 4, long * 0x0082e778) line 3844 + 168 bytes execute_optimized_program(const unsigned char * 0x02286e48, int 40, long * 0x01529b80) line 609 + 16 bytes funcall_compiled_function(long 22433308, int 0, long * 0x0082ec08) line 3452 + 85 bytes Ffuncall(int 1, long * 0x0082ec04) line 3883 + 17 bytes execute_optimized_program(const unsigned char * 0x02286d40, int 6, long * 0x01548ddc) line 609 + 16 bytes funcall_compiled_function(long 22505864, int 11, long * 0x0082f00c) line 3452 + 85 bytes Ffuncall(int 12, long * 0x0082f008) line 3883 + 17 bytes execute_optimized_program(const unsigned char * 0x02503e38, int 47, long * 0x0152dc48) line 609 + 16 bytes funcall_compiled_function(long 22436784, int 0, long * 0x0082f534) line 3452 + 85 bytes Ffuncall(int 1, long * 0x0082f530) line 3883 + 17 bytes apply1(long 22436784, long 20888208) line 4458 + 11 bytes Fcall_interactively(long 20742816, long 20888208, long 20888208) line 460 + 13 bytes Ffuncall(int 2, long * 0x0082f8ec) line 3844 + 127 bytes call1(long 20854392, long 20742816) line 4489 + 11 bytes execute_command_event(command_builder * 0x01798f98, long 24439276) line 4198 + 69 bytes Fdispatch_event(long 24439276) line 4569 + 13 bytes Fcommand_loop_1() line 569 + 9 bytes command_loop_1(long 20888208) line 489 condition_case_1(long 20886024, long (long)* 0x010955a0 command_loop_1(long), long 20888208, long (long, long)* 0x01095150 cmd_error(long, long), long 20888208) line 1917 + 7 bytes command_loop_3() line 251 + 35 bytes command_loop_2(long 20888208) line 264 
internal_catch(long 20650992, long (long)* 0x010952c0 command_loop_2(long), long 20888208, int * volatile 0x00000000, long * volatile 0x00000000) line 1527 + 7 bytes
initial_command_loop(long 20888208) line 300 + 28 bytes
xemacs_21_5_b10_i586_pc_win32(int 1, char * * 0x00e52620, char * * 0x00e52bb0, int 0) line 2356
main(int 1, char * * 0x00e52620, char * * 0x00e52bb0) line 2733
mainCRTStartup() line 338 + 17 bytes
KERNEL32! 77ea847c()
@end example

@node Line Start Cache, Redisplay Piece by Piece, Critical Redisplay Sections, The Redisplay Mechanism
@section Line Start Cache
@cindex line start cache

The traditional scrolling code in Emacs breaks in a variable-height
world.  It depends on the key assumption that the number of lines that
can be displayed at any given time is fixed.  This led to a complete
separation of the scrolling code from the redisplay code.  In order to
fully support variable-height lines, the scrolling code must actually
be tightly integrated with redisplay.  Only redisplay can determine
how many lines will be displayed on a screen for any given starting
point.

What is ideally wanted is a complete list of the starting buffer
position for every possible display line of a buffer, along with the
height of that display line.  Maintaining such a full list would be
very expensive.  We settle for having the cache include information
for all areas which we happen to generate anyhow (i.e. the region
currently being displayed) and for those areas we need to work with.

In order to ensure that the cache accurately represents what redisplay
would actually show, it is necessary to invalidate it in many
situations.  If the buffer changes, the starting positions may no
longer be correct.  If a face or an extent has changed then the line
heights may have altered.  These events happen frequently enough that
the cache can end up being constantly disabled.

With this potentially constant invalidation, when is the cache ever
useful?  Even if the cache is invalidated before every single use, it
is necessary.  Scrolling often requires knowledge about display lines
which are actually above or below the visible region.  The cache
provides a convenient lightweight method of storing this information
for multiple display regions.  This knowledge is necessary for the
scrolling code to always obey the First Golden Rule of Redisplay.

If the cache already contains all of the information that the
scrolling routines happen to need, so that they don't have to go
generate it, then we are able to obey the Third Golden Rule of
Redisplay.  The first thing we do to help out the cache is to always
add the displayed region.  This region had to be generated anyway, so
the cache ends up getting the information basically for free.  In
those cases where a user is simply scrolling around viewing a buffer
there is a high probability that this is sufficient to always provide
the needed information.

The second thing we can do is be smart about invalidating the cache.
TODO---Be smart about invalidating the cache.  Potential places:

@itemize @bullet
@item
Insertions at end-of-line which don't cause line-wraps do not alter
the starting positions of any display lines.  These types of buffer
modifications should not invalidate the cache.  This is actually a
large optimization for redisplay speed as well.
@item
Buffer modifications frequently only affect the display of lines at
and below where they occur.  In these situations we should only
invalidate the part of the cache starting at where the modification
occurs.
@end itemize

In case you're wondering, the Second Golden Rule of Redisplay is not
applicable.
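To make the preceding description concrete, here is a minimal sketch
of the kind of information each cache entry must record.  This is
illustrative only; the actual declaration lives in @file{redisplay.c}
and may differ in detail:

@example
/* One cache entry per display line: where the line starts and ends
   in the buffer, and how tall it is on screen.  Illustrative sketch,
   not the actual declaration. */
typedef struct
@{
  Charbpos start;  /* buffer position where the display line begins */
  Charbpos end;    /* buffer position where the display line ends */
  int height;      /* height of the display line in pixels */
@} line_start_cache_sketch;
@end example

With such entries in hand, scrolling by @var{n} display lines reduces
to walking @var{n} entries through the cached array, without
regenerating the lines themselves.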
@node Redisplay Piece by Piece, Modules for the Redisplay Mechanism, Line Start Cache, The Redisplay Mechanism
@section Redisplay Piece by Piece
@cindex redisplay piece by piece

As you can begin to see, redisplay is complex and also not well
documented.  Chuck no longer works on XEmacs, so this section is my
take on the workings of redisplay.

Redisplay happens in three phases:

@enumerate
@item
Determine the desired display in the area that needs redisplay.
Implemented by @code{redisplay.c}.
@item
Compare the desired display with the current display.
Implemented by @code{redisplay-output.c}.
@item
Output the changes.  Implemented by @code{redisplay-output.c},
@code{redisplay-x.c}, @code{redisplay-msw.c} and
@code{redisplay-tty.c}.
@end enumerate

Steps 1 and 2 are device-independent and relatively complex.  Step 3
is mostly device-dependent.

@subheading Determining the Desired Display

Display attributes are stored in @code{display_line} structures.  Each
@code{display_line} consists of a set of @code{display_block}'s, and
each @code{display_block} contains a number of @code{rune}'s.
Generally, dynarrs of @code{display_line}'s are held by each window,
representing the current display and the desired display.

The @code{display_line} structures are tightly tied to buffers, which
presents a problem for redisplay as this connection is bogus for the
modeline.  Hence the @code{display_line} generation routines are
duplicated for generating the modeline.  This means that the modeline
display code has many bugs that the standard redisplay code does not.

The guts of @code{display_line} generation are in
@code{create_text_block}, which creates a single display line for the
desired locale.  This incrementally parses the characters on the
current line and generates redisplay structures for each.

Gutter redisplay is different.  Because the data to display is stored
in a string we cannot use @code{create_text_block}.  Instead we use
@code{create_string_text_block}, which performs the same function as
@code{create_text_block} but for strings.  Many of the complexities of
@code{create_text_block} to do with cursor handling and selective
display have been removed.
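As a rough illustration of the three phases, here is a hedged sketch
in C.  The function names here are invented for exposition; they are
not the actual entry points in @file{redisplay.c}:

@example
/* Hedged sketch of the three-phase pipeline described above;
   all function names are illustrative only. */
static void
redisplay_window_sketch (struct window *w)
@{
  display_line_dynarr *desired = Dynarr_new (display_line);

  /* Phase 1 (redisplay.c): compute what should be on screen. */
  generate_desired_display_lines (w, desired);   /* hypothetical */

  /* Phase 2 (redisplay-output.c): compare desired with current.
     Phase 3: emit device-specific output for lines that differ. */
  output_changed_display_lines (w, desired);     /* hypothetical */

  /* The desired lines now become the current ones. */
  swap_current_and_desired_lines (w, desired);   /* hypothetical */
@}
@end example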
@node Modules for the Redisplay Mechanism, Modules for other Display-Related Lisp Objects, Redisplay Piece by Piece, The Redisplay Mechanism
@section Modules for the Redisplay Mechanism
@cindex modules for the redisplay mechanism
@cindex redisplay mechanism, modules for the

@example
@file{redisplay-output.c}
@file{redisplay-msw.c}
@file{redisplay-tty.c}
@file{redisplay-x.c}
@file{redisplay.c}
@file{redisplay.h}
@end example

These files provide the redisplay mechanism.  As with many other
subsystems in XEmacs, there is a clean separation between the general
and device-specific support.

@file{redisplay.c} contains the bulk of the redisplay engine.  These
functions update the redisplay structures (which describe how the
screen is to appear) to reflect any changes made to the state of any
displayable objects (buffer, frame, window, etc.) since the last time
that redisplay was called.  These functions are highly optimized to
avoid doing more work than necessary (since redisplay is called
extremely often and is potentially a huge time sink), and depend
heavily on notifications from the objects themselves that changes have
occurred, so that redisplay doesn't explicitly have to check each
possible object.  The redisplay mechanism also contains a great deal
of caching to further speed things up; some of this caching is
contained within the various displayable objects.

@file{redisplay-output.c} goes through the redisplay structures and
converts them into calls to device-specific methods to actually output
the screen changes.

@file{redisplay-x.c}, @file{redisplay-msw.c} and
@file{redisplay-tty.c} are implementations of these redisplay output
methods, for X frames, MS Windows frames and TTY frames, respectively.

@example
@file{indent.c}
@end example

This module contains various functions and Lisp primitives for
converting between buffer positions and screen positions.  These
functions call the redisplay mechanism to do most of the work, and
then examine the redisplay structures to get the necessary
information.  This module needs work.

@example
@file{termcap.c}
@file{terminfo.c}
@file{tparam.c}
@end example

These files contain functions for working with the termcap (BSD-style)
and terminfo (System V style) databases of terminal capabilities and
escape sequences, used when XEmacs is displaying in a TTY.

@example
@file{cm.c}
@file{cm.h}
@end example

These files provide some miscellaneous TTY-output functions and should
probably be merged into @file{redisplay-tty.c}.

@node Modules for other Display-Related Lisp Objects, , Modules for the Redisplay Mechanism, The Redisplay Mechanism
@section Modules for other Display-Related Lisp Objects
@cindex modules for other display-related Lisp objects
@cindex display-related Lisp objects, modules for other
@cindex Lisp objects, modules for other display-related

@example
@file{faces.c}
@file{faces.h}
@end example

@example
@file{bitmaps.h}
@file{glyphs-eimage.c}
@file{glyphs-msw.c}
@file{glyphs-msw.h}
@file{glyphs-widget.c}
@file{glyphs-x.c}
@file{glyphs-x.h}
@file{glyphs.c}
@file{glyphs.h}
@end example

@example
@file{objects-msw.c}
@file{objects-msw.h}
@file{objects-tty.c}
@file{objects-tty.h}
@file{objects-x.c}
@file{objects-x.h}
@file{objects.c}
@file{objects.h}
@end example

@example
@file{menubar-msw.c}
@file{menubar-msw.h}
@file{menubar-x.c}
@file{menubar.c}
@file{menubar.h}
@end example

@example
@file{scrollbar-msw.c}
@file{scrollbar-msw.h}
@file{scrollbar-x.c}
@file{scrollbar-x.h}
@file{scrollbar.c}
@file{scrollbar.h}
@end example

@example
@file{toolbar-msw.c}
@file{toolbar-x.c}
@file{toolbar.c}
@file{toolbar.h}
@end example

@example
@file{font-lock.c}
@end example

This file provides C support for syntax highlighting---i.e.
highlighting different syntactic constructs of a source file in
different colors, for easy reading.  The C support is provided so that
this is fast.

@example
@file{dgif_lib.c}
@file{gif_err.c}
@file{gif_lib.h}
@file{gifalloc.c}
@end example

These modules decoded GIF-format image files, for use with glyphs.
(They were removed due to Unisys patent infringement concerns.)

@node Extents, Faces, The Redisplay Mechanism, Top
@chapter Extents
@cindex extents

@menu
* Introduction to Extents::     Extents are ranges over text, with properties.
* Extent Ordering::             How extents are ordered internally.
* Format of the Extent Info::   The extent information in a buffer or string.
* Zero-Length Extents::         A weird special case.
* Mathematics of Extent Ordering::  A rigorous foundation.
* Extent Fragments::            Cached information useful for redisplay.
@end menu
@node Introduction to Extents, Extent Ordering, Extents, Extents
@section Introduction to Extents
@cindex extents, introduction to

Extents are regions over a buffer, with a start and an end position
denoting the region of the buffer included in the extent.  In
addition, either end can be closed or open, meaning that the endpoint
is or is not logically included in the extent.  Insertion of a
character at a closed endpoint causes the character to go inside the
extent; insertion at an open endpoint causes the character to go
outside.

Extent endpoints are stored using memory indices (see
@file{insdel.c}), to minimize the amount of adjusting that needs to be
done when characters are inserted or deleted.  (Formerly, extent
endpoints at the gap could be either before or after the gap,
depending on the open/closedness of the endpoint.  The intent of this
was to make it so that insertions would automatically go inside or out
of extents as necessary with no further work needing to be done.  It
didn't work out that way, however, and just ended up complexifying and
buggifying all the rest of the code.)

@node Extent Ordering, Format of the Extent Info, Introduction to Extents, Extents
@section Extent Ordering
@cindex extent ordering

Extents are compared using memory indices.  There are two orderings
for extents and both orders are kept current at all times.  The normal
or @dfn{display} order is as follows:

@example
Extent A is ``less than'' extent B,
that is, earlier in the display order,
  if:    A-start < B-start,
  or if: A-start = B-start, and A-end > B-end
@end example

So if two extents begin at the same position, the larger of them is
the earlier one in the display order (@code{EXTENT_LESS} is true).

For the e-order, the same thing holds:

@example
Extent A is ``less than'' extent B in e-order,
that is, later in the buffer,
  if:    A-end < B-end,
  or if: A-end = B-end, and A-start > B-start
@end example

So if two extents end at the same position, the smaller of them is the
earlier one in the e-order (@code{EXTENT_E_LESS} is true).

The display order and the e-order are complementary orders: any
theorem about the display order also applies to the e-order if you
swap all occurrences of ``display order'' and ``e-order'', ``less
than'' and ``greater than'', and ``extent start'' and ``extent end''.
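The two orderings can be captured in a pair of comparison predicates.
Here is a hedged sketch of the logic only, operating on bare endpoint
indices; the real @code{EXTENT_LESS} and @code{EXTENT_E_LESS} macros
in @file{extents.c} operate on actual extent objects:

@example
/* Display order: earlier start wins; on a tie, the larger extent
   (greater end) comes first.  Cf. EXTENT_LESS. */
static int
display_order_less (int a_start, int a_end, int b_start, int b_end)
@{
  return a_start < b_start
    || (a_start == b_start && a_end > b_end);
@}

/* E-order: earlier end wins; on a tie, the smaller extent
   (greater start) comes first.  Cf. EXTENT_E_LESS. */
static int
e_order_less (int a_start, int a_end, int b_start, int b_end)
@{
  return a_end < b_end
    || (a_end == b_end && a_start > b_start);
@}
@end example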
@node Format of the Extent Info, Zero-Length Extents, Extent Ordering, Extents
@section Format of the Extent Info
@cindex extent info, format of the

An extent-info structure consists of a list of the buffer or string's
extents and a @dfn{stack of extents} that lists all of the extents
over a particular position.  The stack-of-extents info is used for
optimization purposes---it basically caches some info that might be
expensive to compute.  Certain otherwise hard computations are easy
given the stack of extents over a particular position, and if the
stack of extents over a nearby position is known (because it was
calculated at some prior point in time), it's easy to move the stack
of extents to the proper position.

Given that the stack of extents is an optimization, and given that it
requires memory, a string's stack of extents is wiped out each time a
garbage collection occurs.  Therefore, any time you retrieve the stack
of extents, it might not be there.  If you need it to be there, use
the @code{_force} version.

Similarly, a string may or may not have an extent_info structure.
(Generally it won't if there haven't been any extents added to the
string.)  So use the @code{_force} version if you need the extent_info
structure to be there.

A list of extents is maintained as a double gap array.  One gap array
is ordered by start index (the @dfn{display order}) and the other is
ordered by end index (the @dfn{e-order}).  Note that positions in an
extent list should logically be conceived of as referring @emph{to} a
particular extent (as is the norm in programs) rather than sitting
between two extents.  Note also that callers of these functions should
not be aware of the fact that the extent list is implemented as an
array, except for the fact that positions are integers (this should be
generalized to handle integers and linked lists equally well).

A gap array is the same structure used by buffer text: an array of
elements with a ``gap'' somewhere in the middle.  Insertion and
deletion happen by moving the gap to the insertion/deletion point, and
then expanding/contracting as necessary.  Gap arrays have a number of
useful properties:

@enumerate
@item
They are space efficient, as there is no need for next/previous
pointers.
@item
If the items in them are sorted, locating an item is fast --
@math{O(log N)}.
@item
Insertion and deletion are very fast (constant time, essentially) if
the gap is near (which favors localized operations, as will usually be
the case).  Even if not, it requires only a block move of memory,
which is generally a highly optimized operation on modern processors.
@item
Code to manipulate them is relatively simple to write.
@end enumerate

An alternative would be balanced binary trees, which have guaranteed
@math{O(log N)} time for all operations (although the constant factors
are not as good, and repeated localized operations will be slower than
for a gap array).  Such code is quite tricky to write, however.
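Here is a minimal sketch of the gap-moving operation that underlies
insertion and deletion, assuming for simplicity an @code{int} element
type; the real gap-array code used for extents is more general:

@example
#include <string.h>

/* Elements live at [0, gap) and [gap + gapsize, numels + gapsize).
   Illustrative sketch, not the actual implementation. */
typedef struct
@{
  int *els;      /* storage, including the gap */
  int gap;       /* index of the first slot of the gap */
  int gapsize;   /* number of unused slots at the gap */
  int numels;    /* number of real elements */
@} gap_array_sketch;

/* Move the gap so that it starts at POS: a single block move
   (memmove) in either direction. */
static void
gap_array_move_gap (gap_array_sketch *ga, int pos)
@{
  if (pos < ga->gap)
    memmove (ga->els + pos + ga->gapsize, ga->els + pos,
             (ga->gap - pos) * sizeof (int));
  else
    memmove (ga->els + ga->gap, ga->els + ga->gap + ga->gapsize,
             (pos - ga->gap) * sizeof (int));
  ga->gap = pos;
@}
@end example

Insertion at position @var{pos} then becomes: move the gap to
@var{pos} (cheap if the gap is already nearby), expand the gap if it
is full, store the new element in the first gap slot, and shrink the
gap by one.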
@node Zero-Length Extents, Mathematics of Extent Ordering, Format of the Extent Info, Extents
@section Zero-Length Extents
@cindex zero-length extents
@cindex extents, zero-length

Extents can be zero-length, and will end up that way if their
endpoints are explicitly set that way or if their detachable property
is @code{nil} and all the text in the extent is deleted.  (The
exception is open-open zero-length extents, which are barred from
existing because there is no sensible way to define their properties.
Deletion of the text in an open-open extent causes it to be converted
into a closed-open extent.)  Zero-length extents are primarily used to
represent annotations, and behave as follows:

@enumerate
@item
Insertion at the position of a zero-length extent expands the extent
if both endpoints are closed; goes after the extent if it is
closed-open; and goes before the extent if it is open-closed.
@item
Deletion of a character on a side of a zero-length extent whose
corresponding endpoint is closed causes the extent to be detached if
it is detachable; if the extent is not detachable or the corresponding
endpoint is open, the extent remains in the buffer, moving as
necessary.
@end enumerate

Note that closed-open, non-detachable zero-length extents behave
exactly like markers and that open-closed, non-detachable zero-length
extents behave like the ``point-type'' marker in Mule.

@node Mathematics of Extent Ordering, Extent Fragments, Zero-Length Extents, Extents
@section Mathematics of Extent Ordering
@cindex mathematics of extent ordering
@cindex extent mathematics
@cindex extent ordering
@cindex display order of extents
@cindex extents, display order

The extents in a buffer are ordered by ``display order'' because that
is the order that the redisplay mechanism needs to process them in.
The e-order is an auxiliary ordering used to facilitate operations
over extents.  The operations that can be performed on the ordered
list of extents in a buffer are

@enumerate
@item
Locate where an extent would go if inserted into the list.
@item
Insert an extent into the list.
@item
Remove an extent from the list.
@item
Map over all the extents that overlap a range.
@end enumerate

(4) requires being able to determine the first and last extents that
overlap a range.

NOTE: @dfn{overlap} is used as follows:

@itemize @bullet
@item
Two ranges overlap if they have at least one point in common.
Whether the endpoints are open or closed makes a difference here.
@item
A point overlaps a range if the point is contained within the range;
this is equivalent to treating a point @math{P} as the range
@math{[P, P]}.
@item
In the case of an @emph{extent} overlapping a point or range, the
extent is normally treated as having closed endpoints.  This applies
consistently in the discussion of stacks of extents and such below.
Note that this definition of overlap is not necessarily consistent
with the extents that @code{map-extents} maps over, since
@code{map-extents} sometimes pays attention to whether the endpoints
of an extent are open or closed.  But for our purposes, it greatly
simplifies things to treat all extents as having closed endpoints.
@end itemize

First, define @math{>}, @math{<}, @math{<=}, etc. as applied to
extents to mean comparison according to the display order.  Comparison
between an extent @math{E} and an index @math{I} means comparison
between @math{E} and the range @math{[I, I]}.  Also define @math{e>},
@math{e<}, @math{e<=}, etc. to mean comparison according to the
e-order.

For any range @math{R}, define @math{R(0)} to be the starting index of
the range and @math{R(1)} to be the ending index of the range.

For any extent @math{E}, define @math{E(next)} to be the extent
directly following @math{E}, and @math{E(prev)} to be the extent
directly preceding @math{E}.  Assume @math{E(next)} and @math{E(prev)}
can be determined from @math{E} in constant time.  (This is easy in
the gap-array implementation, where the neighboring extents are simply
the adjacent elements of the array.)  Similarly, define
@math{E(e-next)} and @math{E(e-prev)} to be the extents directly
following and preceding @math{E} in the e-order.

Now:

Let @math{R} be a range.
Let @math{F} be the first extent overlapping @math{R}.
Let @math{L} be the last extent overlapping @math{R}.

Theorem 1: @math{R(1)} lies between @math{L} and @math{L(next)},
i.e. @math{L <= R(1) < L(next)}.

This follows easily from the definition of display order.  The basic
reason that this theorem applies is that the display order sorts by
increasing starting index.

Therefore, we can determine @math{L} just by looking at where we would
insert @math{R(1)} into the list, and if we know @math{F} and are
moving forward over extents, we can easily determine when we've hit
@math{L} by comparing the extent we're at to @math{R(1)}.

@example
Theorem 2: @math{F(e-prev) e< [1, R(0)] e<= F}.
@end example

This is the analog of Theorem 1, and applies because the e-order sorts
by increasing ending index.
Therefore, @math{F} can be found in the same amount of time as
operation (1), i.e. the time that it takes to locate where an extent
would go if inserted into the e-order list.  This is @math{O(log N)},
since we are using gap arrays to manage extents.

Define a @dfn{stack of extents} (or @dfn{SOE}) as the set of extents
(ordered in display order and e-order, just like for normal extent
lists) that overlap an index @math{I}.

Now:

Let @math{I} be an index, let @math{S} be the stack of extents on
@math{I} and let @math{F} be the first extent in @math{S}.

Theorem 3: The first extent in @math{S} is the first extent that
overlaps any range @math{[I, J]}.

Proof: Any extent that overlaps @math{[I, J]} but does not include
@math{I} must have a start index @math{> I}, and thus be greater than
any extent in @math{S}.

Therefore, finding the first extent that overlaps a range @math{R} is
the same as finding the first extent that overlaps @math{R(0)}.

Theorem 4: Let @math{I2} be an index such that @math{I2 > I}, and let
@math{F2} be the first extent that overlaps @math{I2}.  Then, either
@math{F2} is in @math{S} or @math{F2} is greater than any extent in
@math{S}.

Proof: If @math{F2} does not include @math{I} then its start index is
greater than @math{I} and thus it is greater than any extent in
@math{S}, including @math{F}.  Otherwise, @math{F2} includes @math{I}
and thus is in @math{S}, and thus @math{F2 >= F}.

@node Extent Fragments, , Mathematics of Extent Ordering, Extents
@section Extent Fragments
@cindex extent fragments
@cindex fragments, extent

Imagine that the buffer is divided up into contiguous, non-overlapping
@dfn{runs} of text such that no extent starts or ends within a run
(extents that abut the run don't count).  An extent fragment is a
structure that holds data about the run that contains a particular
buffer position (if the buffer position is at the junction of two
runs, the run after the position is used)---the beginning and end of
the run, a list of all of the extents in that run, the @dfn{merged
face} that results from merging all of the faces corresponding to
those extents, the begin and end glyphs at the beginning of the run,
etc.  This is the information that redisplay needs in order to display
this run.

Extent fragments have to be very quick to update to a new buffer
position when moving linearly through the buffer.  They rely on the
stack-of-extents code, which does the heavy-duty algorithmic work of
determining which extents overlie a particular position.
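The description above translates roughly into the following per-run
cache structure.  This is a hedged sketch with illustrative field
names; the actual @code{struct extent_fragment} in @file{extents.h}
carries additional bookkeeping:

@example
/* Cached data about one run of text, for redisplay.  Illustrative
   sketch; see extents.h for the real declaration. */
struct extent_fragment_sketch
@{
  Bytexpos start, end;              /* boundaries of the run */
  extent_dynarr *extents;           /* all extents lying over the run */
  face_index merged_face;           /* result of merging their faces */
  glyph_block_dynarr *begin_glyphs; /* glyphs at the start of the run */
  glyph_block_dynarr *end_glyphs;   /* glyphs at the end of the run */
@};
@end example

Advancing the fragment to the next run is then a matter of moving the
stack of extents forward and recomputing only the fields that change.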
@node Faces, Glyphs, Extents, Top
@chapter Faces
@cindex faces

Not yet documented.

@node Glyphs, Specifiers, Faces, Top
@chapter Glyphs
@cindex glyphs

Glyphs are graphical elements that can be displayed in XEmacs buffers
or gutters.  We use the term graphical element here in the broadest
possible sense, since glyphs can be as mundane as text or as arcane as
a native tab widget.  In XEmacs, glyphs represent the uninstantiated
state of graphical elements, i.e. they hold all the information
necessary to produce an image on-screen but the image need not exist
at this stage, and multiple screen images can be instantiated from a
single glyph.

@c #### find a place for this discussion
@c The decision to make image specifiers a separate type is debatable.
@c In fact, the design decision to create a separate image specifier
@c type, rather than make glyphs themselves be specifiers, is
@c debatable---the other properties of glyphs are rarely used and could
@c conceivably have been incorporated into the glyph's instantiator.
@c The rarely used glyph types (buffer, pointer, icon) could also have
@c been incorporated into the instantiator.

Glyphs are lazily instantiated by calling one of the glyph functions.
This usually occurs within redisplay, when @code{Fglyph_height} is
called.  Instantiation causes an image-instance to be created and
cached.  This cache is on a per-device basis for all glyphs except
widget-glyphs, and on a per-window basis for widget-glyphs.  The
caching is done by @code{image_instantiate} and is necessary because
it is generally possible to display an image-instance in multiple
domains.  For instance, if we create a Pixmap, we can actually display
this on multiple windows---even though we only need a single Pixmap
instance to do this.  If caching wasn't done then it would be
necessary to create image-instances for every displayable occurrence
of a glyph---and every usage---and this would be extremely memory- and
CPU-intensive.

Widget-glyphs (a.k.a. native widgets) are not cached in this way.
This is because widget-glyph image-instances on screen are toolkit
windows, and thus cannot be reused in multiple XEmacs domains.  Thus
widget-glyphs are cached on an XEmacs window basis.  Any action on a
glyph first consults the cache before actually instantiating a widget.

@section Glyph Instantiation
@cindex glyph instantiation
@cindex instantiation, glyph

Glyph instantiation is a hairy topic and requires some explanation.
The guts of glyph instantiation are contained within
@code{image_instantiate}.  A glyph contains an image, which is a
specifier.  When a glyph function---for instance
@code{Fglyph_height}---asks for a property of the glyph that can only
be determined from its instantiated state, the glyph image is
instantiated and an image instance created.  The instantiation process
is governed by the specifier code and goes through a series of steps
(sketched in code after this list):

@itemize @bullet
@item
Validation.  Instantiation of image instances happens
dynamically---often within the guts of redisplay.  Thus it is often
not feasible to catch instantiator errors at instantiation time.
Instead the instantiator is validated at the time it is added to the
image specifier.  This function is defined by @code{image_validate}
and at a simple level validates keyword value pairs.
@item
Duplication.  The specifier code by default takes a copy of the
instantiator.  This is reasonable for most specifiers, but in the case
of widget-glyphs it can be problematic, since some of the properties
in the instantiator---for instance callbacks---could cause infinite
recursion in the copying process.  Thus the image code defines a
function---@code{image_copy_instantiator}---which will selectively
copy values.  This is controlled by the way that a keyword is defined,
either using @code{IIFORMAT_VALID_KEYWORD} or
@code{IIFORMAT_VALID_NONCOPY_KEYWORD}.  Note that the image caching
and redisplay code rely on instantiator copying to ensure that current
and new instantiators are actually different rather than referring to
the same thing.
@item
Normalization.  Once the instantiator has been copied it must be
converted into a form that is viable at instantiation time.  This can
involve no changes at all, but typically involves things like
converting file names to the actual data.  This function is defined by
@code{image_going_to_add} and @code{normalize_image_instantiator}.
@item
Instantiation.  When an image instance is actually required for
display, it is instantiated using @code{image_instantiate}.  This
involves calling instantiate methods that are specific to the type of
image being instantiated.
@end itemize
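In outline, the add-time part of this pipeline looks something like
the following.  The wrapper names here are invented for exposition;
the real work is done by the routines cited in the list above
(@code{image_validate}, @code{image_copy_instantiator},
@code{image_going_to_add}/@code{normalize_image_instantiator} and
@code{image_instantiate}):

@example
/* Hedged sketch of the instantiator pipeline; all helper names
   are illustrative stand-ins for the routines cited above. */
Lisp_Object
add_instantiator_sketch (Lisp_Object specifier, Lisp_Object inst)
@{
  validate_instantiator (inst);          /* at specifier-add time */
  inst = copy_instantiator (inst);       /* selective copy */
  inst = normalize_instantiator (inst);  /* e.g. file name -> data */
  /* Actual instantiation happens lazily, at display time, when a
     glyph function first needs the instantiated state. */
  return inst;
@}
@end example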
The final instantiation phase also involves a number of steps.  In
order to understand these we need to describe a number of concepts.

An image is instantiated in a @dfn{domain}, where a domain can be any
one of a device, frame, window or image-instance.  The domain gives
the image-instance context and identity, and properties that affect
the appearance of the image-instance may be different for the same
glyph instantiated in different domains.  An example is the face used
to display the image-instance.

Although an image is instantiated in a particular domain, the
instantiation domain is not necessarily the domain in which the
image-instance is cached.  For example, a pixmap can be instantiated
in a window but actually be cached on a per-device basis.  The domain
in which the image-instance is actually cached is called the
@dfn{governing-domain}.  A governing-domain is currently either a
device or a window.  Widget-glyphs and text-glyphs have a window as a
governing-domain; all other image-instances have a device as the
governing-domain.  The governing domain for an image-instance is
determined using the @code{governing_domain} image-instance method.

@section Widget-Glyphs
@cindex widget-glyphs

@section Widget-Glyphs in the MS-Windows Environment
@cindex widget-glyphs in the MS-Windows environment
@cindex MS-Windows environment, widget-glyphs in the

To Do

@section Widget-Glyphs in the X Environment
@cindex widget-glyphs in the X environment
@cindex X environment, widget-glyphs in the

Widget-glyphs under X make heavy use of lwlib (@pxref{Lucid Widget
Library}) for manipulating the native toolkit objects.  This is
primarily so that different toolkits can be supported for
widget-glyphs, just as they are supported for features such as
menubars etc.

Lwlib is extremely poorly documented and quite hairy, so here is my
understanding of what goes on.

Lwlib maintains a set of @code{widget_instance}s which mirror the
hierarchical state of Xt widgets.  I think this is so that widgets can
be updated and manipulated generically by the lwlib library.  For
instance @code{update_one_widget_instance} can cope with multiple
types of widget and multiple types of toolkit.  Each element in the
widget hierarchy is updated from its corresponding
@code{widget_instance} by walking the @code{widget_instance} tree
recursively.

This has desirable properties, such as @code{lw_modify_all_widgets},
which is called from @file{glyphs-x.c} and updates all the properties
of a widget without having to know what the widget is or what toolkit
it is from.  Unfortunately this also has hairy properties, such as
making the lwlib code quite complex.  And of course lwlib has to know
at some level what the widget is and how to set its properties.

@node Specifiers, Menus, Glyphs, Top
@chapter Specifiers
@cindex specifiers

Not yet documented.  Specifiers are documented in depth in the Lisp
Reference manual.  @xref{Specifiers,,, lispref, XEmacs Lisp Reference
Manual}.  The code in @file{specifier.c} is pretty straightforward.

@node Menus, Events and the Event Loop, Specifiers, Top
@chapter Menus
@cindex menus

A menu is set by setting the value of the variable
@code{current-menubar} (which may be buffer-local) and then calling
@code{set-menubar-dirty-flag} to signal a change.  This will cause the
menu to be redrawn at the next redisplay.  The format of the data in
@code{current-menubar} is described in @file{menubar.c}.
Internally the data in @code{current-menubar} is parsed into a tree of
@code{widget_value}s (defined in @file{lwlib.h}); this is accomplished
by the recursive function @code{menu_item_descriptor_to_widget_value()},
called by @code{compute_menubar_data()}.  Such a tree is deallocated
using @code{free_widget_value()}.

@code{update_screen_menubars()} is one of the external entry points.
This checks to see, for each screen, if that screen's menubar needs to
be updated.  This is the case if

@enumerate
@item
@code{set-menubar-dirty-flag} was called since the last redisplay.
(This function sets the C variable @code{menubar_has_changed}.)
@item
The buffer displayed in the screen has changed.
@item
The screen has no menubar currently displayed.
@end enumerate

@code{set_screen_menubar()} is called for each such screen.  This
function calls @code{compute_menubar_data()} to create the tree of
@code{widget_value}s, then calls @code{lw_create_widget()},
@code{lw_modify_all_widgets()}, and/or @code{lw_destroy_all_widgets()}
to create the X-Toolkit widget associated with the menu.

@code{update_psheets()}, the other external entry point, actually
changes the menus being displayed.  It uses the widgets fixed by
@code{update_screen_menubars()} and calls various X functions to
ensure that the menus are displayed properly.

The menubar widget is set up so that @code{pre_activate_callback()} is
called when the menu is first selected (i.e. mouse button goes down),
and @code{menubar_selection_callback()} is called when an item is
selected.  @code{pre_activate_callback()} calls the function in
@code{activate-menubar-hook}, which can change the menubar (this is
described in @file{menubar.c}).  If the menubar is changed,
@code{set_screen_menubars()} is called.
@code{menubar_selection_callback()} enqueues a menu event, putting in
it a function to call (either @code{eval} or
@code{call-interactively}) and its argument, which is the callback
function or form given in the menu's description.

@node Events and the Event Loop, Asynchronous Events; Quit Checking, Menus, Top
@chapter Events and the Event Loop
@cindex events and the event loop
@cindex event loop, events and the

@menu
* Introduction to Events::
* Main Loop::
* Specifics of the Event Gathering Mechanism::
* Specifics About the Emacs Event::
* Event Queues::
* Event Stream Callback Routines::
* Other Event Loop Functions::
* Stream Pairs::
* Converting Events::
* Dispatching Events; The Command Builder::
* Focus Handling::
* Editor-Level Control Flow Modules::
@end menu

@node Introduction to Events, Main Loop, Events and the Event Loop, Events and the Event Loop
@section Introduction to Events
@cindex events, introduction to

An event is an object that encapsulates information about an
interesting occurrence in the operating system.  Events are generated
either by user action, direct (e.g. typing on the keyboard or moving
the mouse) or indirect (moving another window, thereby generating an
expose event on an Emacs frame), or as a result of some other
typically asynchronous action happening, such as output from a
subprocess being ready or a timer expiring.  Events come into the
system in an asynchronous fashion (typically through a callback being
called) and are converted into a synchronous event queue (first-in,
first-out) in a process that we will call @dfn{collection}.

Note that each application has its own event queue.  (It is immaterial
whether the collection process directly puts the events in the proper
application's queue, or puts them into a single system queue, which is
later split up.)
The most basic level of event collection is done by the operating
system or window system.  Typically, XEmacs does its own event
collection as well.  Often there are multiple layers of collection in
XEmacs, with events from various sources being collected into a queue,
which is then combined with other sources to go into another queue
(i.e. a second level of collection), with perhaps another level on top
of this, etc.

XEmacs has its own types of events (called @dfn{Emacs events}), which
provides an abstract layer on top of the system-dependent nature of
the most basic events that are received.  Part of the complex nature
of the XEmacs event collection process involves converting from the
operating-system events into the proper Emacs events---there may not
be a one-to-one correspondence.

Emacs events are documented in @file{events.h}; I'll discuss them
later.

@node Main Loop, Specifics of the Event Gathering Mechanism, Introduction to Events, Events and the Event Loop
@section Main Loop
@cindex main loop
@cindex events, main loop

The @dfn{command loop} is the top-level loop that the editor is always
running.  It loops endlessly, calling @code{next-event} to retrieve an
event and @code{dispatch-event} to execute it.  @code{dispatch-event}
does the appropriate thing with non-user events (process, timeout,
magic, eval, mouse motion); this involves calling a Lisp handler
function, redrawing a newly-exposed part of a frame, reading
subprocess output, etc.  For user events, @code{dispatch-event} looks
up the event in relevant keymaps or menubars; when a full key sequence
or menubar selection is reached, the appropriate function is executed.
@code{dispatch-event} may have to keep state across calls; this is
done in the ``command-builder'' structure associated with each console
(remember, there's usually only one console), and the engine that
looks up keystrokes and constructs full key sequences is called the
@dfn{command builder}.  This is documented elsewhere.

The guts of the command loop are in @code{command_loop_1()}.  This
function doesn't catch errors, though---that's the job of
@code{command_loop_2()}, which is a condition-case (i.e.
error-trapping) wrapper around @code{command_loop_1()}.
@code{command_loop_1()} never returns, but may get thrown out of.

When an error occurs, @code{cmd_error()} is called, which usually
invokes the Lisp error handler in @code{command-error}; however, a
default error handler is provided if @code{command-error} is
@code{nil} (e.g. during startup).  The purpose of the error handler is
simply to display the error message and do associated cleanup; it does
not need to throw anywhere.  When the error handler finishes, the
condition-case in @code{command_loop_2()} will finish and
@code{command_loop_2()} will reinvoke @code{command_loop_1()}.

@code{command_loop_2()} is invoked from three places: from
@code{initial_command_loop()} (called from @code{main()} at the end of
internal initialization), from the Lisp function
@code{recursive-edit}, and from @code{call_command_loop()}.

@code{call_command_loop()} is called when a macro is started and when
the minibuffer is entered; normal termination of the macro or
minibuffer causes a throw out of the recursive command loop.
(The throw goes to @code{execute-kbd-macro} for macros and
@code{exit} for minibuffers.  Note also that the low-level
minibuffer-entering function, @code{read-minibuffer-internal},
provides its own error handling and does not need
@code{command_loop_2()}'s error encapsulation; so it tells
@code{call_command_loop()} to invoke @code{command_loop_1()}
directly.)  Note that both @code{read-minibuffer-internal} and
@code{recursive-edit} set up a catch for @code{exit}; this is why
@code{abort-recursive-edit}, which throws to this catch, exits out of
either one.

@code{initial_command_loop()}, called from @code{main()}, sets up a
catch for @code{top-level} when invoking @code{command_loop_2()},
allowing functions to throw all the way to the top level if they
really need to.  Before invoking @code{command_loop_2()},
@code{initial_command_loop()} calls @code{top_level_1()}, which
handles all of the startup stuff (creating the initial frame, handling
the command-line options, loading the user's @file{.emacs} file,
etc.).  The function that actually does this is in Lisp and is pointed
to by the variable @code{top-level}; normally this function is
@code{normal-top-level}.  @code{top_level_1()} is just an
error-handling wrapper similar to @code{command_loop_2()}.  Note also
that @code{initial_command_loop()} sets up a catch for
@code{top-level} when invoking @code{top_level_1()}, just like when it
invokes @code{command_loop_2()}.
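Stripped of error handling and command building, the top-level loop
amounts to the following hedged sketch (the real loop is
@code{command_loop_1()} in @file{cmdloop.c}; this also assumes that
@code{Fnext_event} creates a fresh event object when passed
@code{nil}, as the Lisp-level @code{next-event} does):

@example
/* Hedged sketch of the command loop: fetch an event, execute it,
   forever.  Error trapping (command_loop_2) and the command
   builder are omitted. */
static void
command_loop_sketch (void)
@{
  for (;;)
    @{
      Lisp_Object event = Fnext_event (Qnil, Qnil); /* may block */
      Fdispatch_event (event);  /* keymap lookup and execution */
    @}
@}
@end example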
@node Specifics of the Event Gathering Mechanism, Specifics About the Emacs Event, Main Loop, Events and the Event Loop
@section Specifics of the Event Gathering Mechanism
@cindex event gathering mechanism, specifics of the

Here is an approximate diagram of the collection processes at work in
XEmacs, under TTYs (TTYs are simpler than X, so we'll look at this
first):

@noindent
@example
 [Collectors in the OS]
   asynch.      asynch.      asynch.     asynch.
   kbd events   kbd events   process     process
   (TTY)        (TTY)        output      output
      |            |         (pipe)      (pipe)
      |            |            |           |
 [signal handlers in XEmacs]    |           |
   SIGINT, SIGQUIT,             |           |
   SIGWINCH, SIGALRM            |           |
      |                         |           |
      +--> fake timeouts --> file desc. (pipe)
      |                         |           |
      V            V            V           V
   file desc.   file desc.   file desc.  file desc.
   (TTY)        (TTY)        (pipe)      (pipe)
      |            |            |           |
      V            V            V           V
 [collected using @code{select()} in @code{emacs_tty_next_event()}
  and converted to the appropriate Emacs event]
                  |
                  V
        (above this line is TTY-specific)
             Emacs event
        (below this line is the generic event mechanism)
                  |
   was there a SIGINT?     if not, call
         |                 @code{emacs_tty_next_event()}
         |                      |
         V                      V
 [collected in @code{event_stream_next_event()};
  SIGINT is converted using @code{maybe_read_quit_event()}]
                  |
                  V
             Emacs event ---> maybe_kbd_translate()
                  |
   command event queue           if not from command event
   (contains events that were    queue, call
   read earlier but not          @code{event_stream_next_event()}
   processed, typically when          |
   waiting in a sit-for,              |
   sleep-for, etc. for a              |
   particular event to be             |
   received)                          |
         |                            |
         V                            V
 [collected in @code{next_event_internal()}]
                  |
   unread-     unread-     event from     else, call
   command-    command-    keyboard       @code{next_event_internal()}
   events      event       macro
      |           |           |              |
      V           V           V              V
 [collected in @code{next-event}, which may loop more than once
  if the event it gets is on a dead frame, device, etc.]
                  |
                  V
  feed into top-level event loop, which repeatedly calls
  @code{next-event} and then dispatches the event using
  @code{dispatch-event}
@end example

Notice the separation between the TTY-specific and generic event
mechanisms.  When using the Xt-based event loop, the TTY-specific
stuff is replaced but the rest stays the same.

It's also important to realize that only one kind of system-specific
event loop can be operating at a time, and it must be able to receive
all kinds of events simultaneously.  For the two existing event loops
(implemented in @file{event-tty.c} and @file{event-Xt.c},
respectively), the TTY event loop @emph{only} handles TTY consoles,
while the Xt event loop handles @emph{both} TTY and X consoles.  This
situation is different from all of the output handlers, where you
simply have one per console type.

Here's the Xt event loop diagram (notice that below a certain point,
it's the same as the above diagram):

@example
 [Collectors in the OS]            [Collectors in the OS and
                                    X Window System]
   asynch.       asynch.              asynch. X events
   kbd events    process output       (socket) (socket)
   (TTY) (TTY)   (pipe) (pipe)             |
      |               |                    |
 [signal handlers in XEmacs]               |
   SIGINT, SIGQUIT,                        |
   SIGWINCH, SIGALRM                       |
      |                                    |
      +--> timeouts --> fake file desc. (pipe)
      |               |                    |
      V               V                    V
   file desc.      file desc.         file desc.
   (TTY) (TTY)     (pipe) (pipe)      (socket) (socket) (pipe)
      |               |                    |
      V               V                    V
 [collected using @code{select()} in @code{_XtWaitForSomething()},
  called from @code{XtAppProcessEvent()}, called in
  @code{emacs_Xt_next_event()}; dispatched to various callbacks]
      |                     |
  emacs_Xt_            p_s_callback()
  event_handler()        [popup_selection_callback]
      |                x_u_v_s_callback()
      |                  [x_update_vertical_scrollbar_callback]
      |                x_u_h_s_callback()
      |                  [x_update_horizontal_scrollbar_callback]
      |                search_callback()
      |                     |
  enqueue_Xt_          signal_special_Xt_user_event()
  dispatch_event()          |
  [maybe multiple      enqueue_Xt_dispatch_event()
   times, maybe 0           |
   times]                   |       @code{Xt_what_callback()}
      |                     |       sets flags
      V                     V            |
      dispatch event queue               |
      |                                  |
      V                                  V
 [collected and converted as appropriate in
  @code{emacs_Xt_next_event()}]
                  |
                  V
        (above this line is Xt-specific)
             Emacs event
        (below this line is the generic event mechanism)
                  |
  (from here on, the flow is identical to the TTY diagram
   above, with @code{emacs_Xt_next_event()} in place of
   @code{emacs_tty_next_event()})
@end example
for | a particular event to be received) | | | | | V V ---->----------------------------------<------ | | [collected in | @code{next_event_internal()}] | unread- unread- event from | command- command- keyboard else, call events event macro @code{next_event_internal()} | | | | | | | | | | | | V V V V --------->----------------------<------------ | | [collected in @code{next-event}, which may loop | more than once if the event it gets is on | a dead frame, device, etc.] | | V feed into top-level event loop, which repeatedly calls @code{next-event} and then dispatches the event using @code{dispatch-event}
@end example

@node Specifics About the Emacs Event, Event Queues, Specifics of the Event Gathering Mechanism, Events and the Event Loop
@section Specifics About the Emacs Event
@cindex event, specifics about the Lisp object

@node Event Queues, Event Stream Callback Routines, Specifics About the Emacs Event, Events and the Event Loop
@section Event Queues
@cindex event queues
@cindex queues, event

There are two event queues here -- the command event queue (#### which should be called ``deferred event queue'' and is in my glyph ws) and the dispatch event queue.  (MS Windows actually has an extra dispatch queue for non-user events and uses the generic one only for user events.  This is because user and non-user events in Windows come through the same place -- the window procedure -- but under X, it's possible to selectively process events such that we take all the user events before the non-user ones.  #### In fact, given the way we now drain the queue, we might need two separate queues, like under Windows.  Need to think carefully exactly how this works, and should certainly generalize the two different queues.)

The dispatch queue (which used to be duplicated inside of each event implementation) is used for events that have been read from the window-system event queue(s) but not yet processed by @code{next_event_internal()}.  It exists for two reasons: (1) in many implementations, events come from the window system by way of callbacks, which need to push the events to be returned onto a queue; (2) in order to handle QUIT in a guaranteed correct fashion without resorting to weird implementation-specific hacks that may or may not work well, we need to drain the window-system event queues and then look through to see if there's an event matching quit-char (usually ^G).  The drained events need to go onto a queue.  (There are other, similar cases where we need to drain the pending events so we can look ahead -- for example, checking for pending expose events under X to avoid excessive server activity.)

The command event queue is used @strong{AFTER} an event has been read from @code{next_event_internal()}, when it needs to be pushed back.  This includes, for example, @code{accept-process-output}, @code{sleep-for} and @code{wait_delaying_user_input()}.  Eval events and the like, generated by @code{enqueue-eval-event}, @code{enqueue_magic_eval_event()}, etc., are also pushed onto this queue.  Some events generated by callbacks are also pushed onto this queue, #### although maybe they shouldn't be.

The command queue takes precedence over the dispatch queue.  #### It is worth investigating to see whether both queues are really needed, and how exactly they should be used.  @code{enqueue-eval-event}, for example, could certainly push onto the dispatch queue, and all callbacks maybe should.
@code{wait_delaying_user_input()} seems to need both queues, since it can take events from the dispatch queue and push them onto the command queue; but it perhaps could be rewritten to avoid this.  #### In general we need to review the handling of these two queues, figure out exactly what ought to be happening, and document it.

@node Event Stream Callback Routines, Other Event Loop Functions, Event Queues, Events and the Event Loop
@section Event Stream Callback Routines
@cindex event stream callback routines
@cindex callback routines, event stream

There is one object called an @code{event_stream}.  This object contains callback functions for doing the window-system-dependent operations that XEmacs requires.

If XEmacs is compiled with support for X11 and the X Toolkit, then this @code{event_stream} structure will contain functions that can cope with input on XEmacs windows on multiple displays, as well as input from dumb TTY frames.

If it is desired to have XEmacs able to open frames on the displays of multiple heterogeneous machines, X11 and SunView, or X11 and NeXT, for example, then it will be necessary to construct an @code{event_stream} structure that can cope with the given types.  Currently, the only implemented event streams are for dumb TTYs, for X11 plus dumb TTYs, and for MS Windows.

To implement this for one window system is relatively simple.  To implement this for multiple window systems is trickier and may not be possible in all situations, but it's been done for X and TTY.

Note that these callbacks are @strong{NOT} console methods; that's because the routines are not specific to a particular console type but must be able to simultaneously cope with all allowable console types.

The slots of the @code{event_stream} structure:

@table @code
@item next_event_cb
A function which fills in an XEmacs_event structure with the next event available.  If there is no event available, then this should block.  IMPORTANT: timer events and especially process events @strong{must not} be returned if there are events of other types available; otherwise you can end up with an infinite loop in @code{Fdiscard_input()}.

@item event_pending_cb
A function which says whether there are events to be read.  If called with an argument of 0, then this should say whether calling the @code{next_event_cb} will block.  If called with a non-zero argument, then this should say whether there are that many user-generated events pending (that is, keypresses, mouse-clicks, dialog-box selection events, etc.).  (This is used for redisplay optimization, among other things.)  The difference is that the former includes process events and timer events, but the latter doesn't.  If this function is not sure whether there are events to be read, it @strong{must} return 0.  Otherwise various undesirable effects will occur, such as redisplay not occurring until the next event occurs.

@item handle_magic_event_cb
XEmacs calls this with an event structure which contains window-system-dependent information that XEmacs doesn't need to know about, but which must happen in order.  If the @code{next_event_cb} never returns an event of type ``magic'', this will never be used.

@item format_magic_event_cb
Called with a magic event; print a representation of the innards of the event to @var{PSTREAM}.

@item compare_magic_event_cb
Called with two magic events; return non-zero if the innards of the two are equal, zero otherwise.

@item hash_magic_event_cb
Called with a magic event; return a hash of the innards of the event.
@item add_timeout_cb Called with an @var{EMACS_TIME}, the absolute time at which a wakeup event should be generated; and a void *, which is an arbitrary value that will be returned in the timeout event. The timeouts generated by this function should be one-shots: they fire once and then disappear. This callback should return an int id-number which uniquely identifies this wakeup. If an implementation doesn't have microseconds or millisecond granularity, it should round up to the closest value it can deal with. @item remove_timeout_cb Called with an int, the id number of a wakeup to discard. This id number must have been returned by the @code{add_timeout_cb}. If the given wakeup has already expired, this should do nothing. @item select_process_cb @item unselect_process_cb These callbacks tell the underlying implementation to add or remove a file descriptor from the list of fds which are polled for inferior-process input. When input becomes available on the given process connection, an event of type "process" should be generated. @item select_console_cb @item unselect_console_cb These callbacks tell the underlying implementation to add or remove a console from the list of consoles which are polled for user-input. @item select_device_cb @item unselect_device_cb These callbacks are used by Unixoid event loops (those that use @code{select()} and file descriptors and have a separate input fd per device). @item create_io_streams_cb @item delete_io_streams_cb These callbacks are called by process code to create the input and output lstreams which are used for subprocess I/O. @item quitp_cb A handler function called from the @code{QUIT} macro which should check whether the quit character has been typed. On systems with SIGIO, this will not be called unless the @code{sigio_happened} flag is true (it is set from the SIGIO handler). @end table XEmacs has its own event structures, which are distinct from the event structures used by X or any other window system. It is the job of the event_stream layer to translate to this format. @node Other Event Loop Functions, Stream Pairs, Event Stream Callback Routines, Events and the Event Loop @section Other Event Loop Functions @cindex event loop functions, other @code{detect_input_pending()} and @code{input-pending-p} look for input by calling @code{event_stream->event_pending_p} and looking in @code{[V]unread-command-event} and the @code{command_event_queue} (they do not check for an executing keyboard macro, though). @code{discard-input} cancels any command events pending (and any keyboard macros currently executing), and puts the others onto the @code{command_event_queue}. There is a comment about a ``race condition'', which is not a good sign. @code{next-command-event} and @code{read-char} are higher-level interfaces to @code{next-event}. @code{next-command-event} gets the next @dfn{command} event (i.e. keypress, mouse event, menu selection, or scrollbar action), calling @code{dispatch-event} on any others. @code{read-char} calls @code{next-command-event} and uses @code{event_to_character()} to return the character equivalent. With the right kind of input method support, it is possible for (read-char) to return a Kanji character. @node Stream Pairs, Converting Events, Other Event Loop Functions, Events and the Event Loop @section Stream Pairs @cindex stream pairs @cindex pairs, stream Since there are many possible processes/event loop combinations, the event code is responsible for creating an appropriate lstream type. 
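By way of preview, here is a minimal sketch of what such a creation callback might look like for a Unixoid event loop.  All the function and helper names here (@code{example_create_io_streams}, @code{example_make_fd_input_stream}, and so on) are invented for illustration, and the real callbacks (e.g. in @file{event-unixoid.c}) differ in detail; the handles, the USID return value, and the special values @code{USID_ERROR} and @code{USID_DONTHASH} are explained in the rest of this section.

@example
/* Illustrative sketch only -- not the actual XEmacs prototype.
   Handles are assumed to be Unix file descriptors, cast to void *. */
static USID
example_create_io_streams (void *inhandle, void *outhandle,
                           Lisp_Object *instream, Lisp_Object *outstream)
@{
  int infd  = (int) (EMACS_INT) inhandle;
  int outfd = (int) (EMACS_INT) outhandle;

  /* A "denying" handle value (here assumed to be -1) means that the
     corresponding lstream should not be created. */
  *instream  = infd  >= 0 ? example_make_fd_input_stream  (infd)  : Qnil;
  *outstream = outfd >= 0 ? example_make_fd_output_stream (outfd) : Qnil;

  if (NILP (*instream) && NILP (*outstream))
    return USID_ERROR;        /* the stream pair cannot be created */

  /* Use the input fd itself as the USID, so that the event loop can
     later map input on that fd back to its process. */
  return (USID) infd;
@}
@end example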
The process implementation does not care which lstream type it is given.  The create-stream-pair function is passed two @code{void *} values, which identify implementation-dependent ``handles''.  The process implementation uses these handles to communicate with child processes.  The function must be prepared to receive the handle types of any process implementation.  Since only one process implementation exists in a particular XEmacs configuration, preprocessing is the means by which support for the code dealing with particular handle types is compiled in.

For example, a Unixoid event loop, which relies on file descriptors, may be asked to create a pair of streams by a Unix-style process implementation.  In this case, the handles passed are Unix file descriptors, and the code may deal with these directly.  However, the same code may be used on a Win32 system with X Windows.  In this case, the Win32 process implementation passes handles of type HANDLE, and the @code{create_io_streams} function must call an appropriate function to obtain file descriptors for the HANDLEs, so that these descriptors may be passed to @code{XtAddInput}.

A handle may also be given a special ``denying'' value, in which case the corresponding lstream should not be created.

The return value of the function is a unique stream identifier (USID).  It is used by the platform-independent part of the process implementation.  The @code{get_process_from_usid} function returns the process object corresponding to a given USID.  The event stream is responsible for converting its internal handle type into a USID.

An example is the TTY event stream.  When a file descriptor signals input, the event loop must determine the process to which the input is destined.  Thus, the implementation uses the file descriptor of the process input stream as the USID, by simply casting the fd value to the USID type.

There are two special USID values.  One, @code{USID_ERROR}, indicates that the stream pair cannot be created.  The second, @code{USID_DONTHASH}, indicates that the streams are created, but the event stream does not wish to be able to find the process by its USID.  Specifically, if an event stream implementation never calls @code{get_process_from_usid}, this value should always be returned, to avoid accumulating useless USID-to-process associations.

@node Converting Events, Dispatching Events; The Command Builder, Stream Pairs, Events and the Event Loop
@section Converting Events
@cindex converting events
@cindex events, converting

@code{character_to_event()}, @code{event_to_character()}, @code{event-to-character}, and @code{character-to-event} convert between characters and keypress events corresponding to the characters.  If the event was not a keypress, @code{event_to_character()} returns -1 and @code{event-to-character} returns @code{nil}.  These functions convert between the character representation and the split-up event representation (keysym plus mod keys).

@node Dispatching Events; The Command Builder, Focus Handling, Converting Events, Events and the Event Loop
@section Dispatching Events; The Command Builder
@cindex dispatching events; the command builder
@cindex events; the command builder, dispatching
@cindex command builder, dispatching events; the

Not yet documented.

@node Focus Handling, Editor-Level Control Flow Modules, Dispatching Events; The Command Builder, Events and the Event Loop
@section Focus Handling
@cindex focus handling

Ben's capsule lecture on focus:

In GNU Emacs @code{select-frame} never changes the window-manager frame focus.  All it does is change the ``selected frame''.
This is similar to what happens when we call @code{select-device} or @code{select-console}. Whenever an event comes in (including a keyboard event), its frame is selected; therefore, evaluating @code{select-frame} in @samp{*scratch*} won't cause any effects because the next received event (in the same frame) will cause a switch back to the frame displaying @samp{*scratch*}. Whenever a focus-change event is received from the window manager, it generates a @code{switch-frame} event, which causes the Lisp function @code{handle-switch-frame} to get run. This basically just runs @code{select-frame} (see below, however). In GNU Emacs, if you want to have an operation run when a frame is selected, you supply an event binding for @code{switch-frame} (and then maybe call @code{handle-switch-frame}, or something ...). In XEmacs, we @strong{do} change the window-manager frame focus as a result of @code{select-frame}, but not until the next time an event is received, so that a function that momentarily changes the selected frame won't cause WM focus flashing. (#### There's something not quite right here; this is causing the wrong-cursor-focus problems that you occasionally see. But the general idea is correct.) This approach is winning for people who use the explicit-focus model, but is trickier to implement. We also don't make the @code{switch-frame} event visible but instead have @code{select-frame-hook}, which is a better approach. There is the problem of surrogate minibuffers, where when we enter the minibuffer, you essentially want to temporarily switch the WM focus to the frame with the minibuffer, and switch it back when you exit the minibuffer. GNU Emacs solves this with the crockish @code{redirect-frame-focus}, which says "for keyboard events received from FRAME, act like they're coming from FOCUS-FRAME". I think what this means is that, when a keyboard event comes in and the event manager is about to select the event's frame, if that frame has its focus redirected, the redirected-to frame is selected instead. That way, if you're in a minibufferless frame and enter the minibuffer, then all Lisp functions that run see the selected frame as the minibuffer's frame rather than the minibufferless frame you came from, so that (e.g.) your typing actually appears in the minibuffer's frame and things behave sanely. There's also some weird logic that switches the redirected frame focus from one frame to another if Lisp code explicitly calls @code{select-frame} (but not if @code{handle-switch-frame} is called), and saves and restores the frame focus in window configurations, etc. etc. All of this logic is heavily @code{#if 0}'d, with lots of comments saying "No, this approach doesn't seem to work, so I'm trying this ... is it reasonable? Well, I'm not sure ..." that are a red flag indicating crockishness. Because of our way of doing things, we can avoid all this crock. Keyboard events never cause a select-frame (who cares what frame they're associated with? They come from a console, only). We change the actual WM focus to a surrogate minibuffer frame, so we don't have to do any internal redirection. In order to get the focus back, I took the approach in @file{minibuf.el} of just checking to see if the frame we moved to is still the selected frame, and move back to the old one if so. Conceivably we might have to do the weird "tracking" that GNU Emacs does when @code{select-frame} is called, but I don't think so. 
If the selected frame moved from the minibuffer frame, then we just leave it there, figuring that someone knows what they're doing. Because we don't have any redirection recorded anywhere, it's safe to do this, and we don't end up with unwanted redirection. @node Editor-Level Control Flow Modules, , Focus Handling, Events and the Event Loop @section Editor-Level Control Flow Modules @cindex control flow modules, editor-level @cindex modules, editor-level control flow @example @file{event-Xt.c} @file{event-msw.c} @file{event-stream.c} @file{event-tty.c} @file{events-mod.h} @file{gpmevent.c} @file{gpmevent.h} @file{events.c} @file{events.h} @end example These implement the handling of events (user input and other system notifications). @file{events.c} and @file{events.h} define the @dfn{event} Lisp object type and primitives for manipulating it. @file{event-stream.c} implements the basic functions for working with event queues, dispatching an event by looking it up in relevant keymaps and such, and handling timeouts; this includes the primitives @code{next-event} and @code{dispatch-event}, as well as related primitives such as @code{sit-for}, @code{sleep-for}, and @code{accept-process-output}. (@file{event-stream.c} is one of the hairiest and trickiest modules in XEmacs. Beware! You can easily mess things up here.) @file{event-Xt.c} and @file{event-tty.c} implement the low-level interfaces onto retrieving events from Xt (the X toolkit) and from TTY's (using @code{read()} and @code{select()}), respectively. The event interface enforces a clean separation between the specific code for interfacing with the operating system and the generic code for working with events, by defining an API of basic, low-level event methods; @file{event-Xt.c} and @file{event-tty.c} are two different implementations of this API. To add support for a new operating system (e.g. NeXTstep), one merely needs to provide another implementation of those API functions. Note that the choice of whether to use @file{event-Xt.c} or @file{event-tty.c} is made at compile time! Or at the very latest, it is made at startup time. @file{event-Xt.c} handles events for @emph{both} X and TTY frames; @file{event-tty.c} is only used when X support is not compiled into XEmacs. The reason for this is that there is only one event loop in XEmacs: thus, it needs to be able to receive events from all different kinds of frames. @example @file{keymap.c} @file{keymap.h} @end example @file{keymap.c} and @file{keymap.h} define the @dfn{keymap} Lisp object type and associated methods and primitives. (Remember that keymaps are objects that associate event descriptions with functions to be called to ``execute'' those events; @code{dispatch-event} looks up events in the relevant keymaps.) @example @file{cmdloop.c} @end example @file{cmdloop.c} contains functions that implement the actual editor command loop---i.e. the event loop that cyclically retrieves and dispatches events. This code is also rather tricky, just like @file{event-stream.c}. @example @file{macros.c} @file{macros.h} @end example These two modules contain the basic code for defining keyboard macros. These functions don't actually do much; most of the code that handles keyboard macros is mixed in with the event-handling code in @file{event-stream.c}. @example @file{minibuf.c} @end example This contains some miscellaneous code related to the minibuffer (most of the minibuffer code was moved into Lisp by Richard Mlynarik). 
This includes the primitives for completion (although filename completion is in @file{dired.c}), the lowest-level interface to the minibuffer (if the command loop were cleaned up, this too could be in Lisp), and code for dealing with the echo area (this, too, was mostly moved into Lisp, and the only code remaining is code to call out to Lisp or provide simple bootstrapping implementations early in temacs, before the echo-area Lisp code is loaded).

@node Asynchronous Events; Quit Checking, Lstreams, Events and the Event Loop, Top
@chapter Asynchronous Events; Quit Checking
@cindex asynchronous events; quit checking
@cindex asynchronous events

@menu
* Signal Handling::
* Control-G (Quit) Checking::
* Profiling::
* Asynchronous Timeouts::
* Exiting::
@end menu

@node Signal Handling, Control-G (Quit) Checking, Asynchronous Events; Quit Checking, Asynchronous Events; Quit Checking
@section Signal Handling
@cindex signal handling

@node Control-G (Quit) Checking, Profiling, Signal Handling, Asynchronous Events; Quit Checking
@section Control-G (Quit) Checking
@cindex Control-g checking
@cindex C-g checking
@cindex quit checking
@cindex QUIT checking
@cindex critical quit

@emph{Note}: The code to handle QUIT is divided between @file{lisp.h} and @file{signal.c}.  There is also some special-case code in the async timer code in @file{event-stream.c} to notice when the poll-for-quit (and poll-for-sigchld) timers have gone off.

Here's an overview of how this convoluted stuff works:

@enumerate
@item
Scattered throughout the XEmacs core code are calls to the macro @code{QUIT}.  This macro checks to see whether a @kbd{C-g} has recently been pressed and not yet handled, and if so, it handles the @kbd{C-g} by calling @code{signal_quit()}, which invokes the standard @code{Fsignal()} code, with the error being @code{Qquit}.  Lisp code can establish handlers for this (using @code{condition-case}), but normally there is no handler, and so execution is thrown back to the innermost enclosing event loop.  (One of the things that happens when entering an event loop is that a @code{condition-case} is established that catches @strong{all} calls to @code{signal}, including this one.)

@item
How does the @code{QUIT} macro check to see whether @kbd{C-g} has been pressed?  Obviously this needs to be extremely fast.  Now for some history.  In early Lemacs as inherited from the FSF going back 15 years or more, there was a great fondness for using SIGIO (which is sent whenever there is I/O available on a given socket, tty, etc.).  In fact, in GNU Emacs, perhaps even today, all reading of events from the X server occurs inside the SIGIO handler!  This is crazy, but not completely relevant.  What is relevant is that similar stuff happened inside the SIGIO handler for @kbd{C-g}: it searched through all the pending (i.e. not yet delivered to XEmacs) X events for one that matched @kbd{C-g}.  When it saw a match, it set @code{Vquit_flag} to @code{Qt}.  On TTY's, @kbd{C-g} is actually mapped to be the interrupt character (i.e. it generates SIGINT), and XEmacs's handler for this signal sets @code{Vquit_flag} to @code{Qt}.  Then, sometime later after the signal handlers finished and a @code{QUIT} macro was called, the macro noticed the setting of @code{Vquit_flag} and used this as an indication to call @code{signal_quit()}.  What @code{signal_quit()} actually does is set @code{Vquit_flag} to @code{Qnil} (so that we won't get repeated interruptions from a single @kbd{C-g} press) and then calls the equivalent of @code{(signal 'quit nil)}.
@item
Another complication is that @code{Vquit_flag} is actually exported to Lisp as @code{quit-flag}.  This allows users some level of control over whether and when @kbd{C-g} is processed as quit, esp. in combination with @code{inhibit-quit}.  This is another Lisp variable, and if set to non-@code{nil}, it inhibits @code{signal_quit()} from getting called, meaning that the @kbd{C-g} gets essentially ignored.  But not completely: Because the resetting of @code{quit-flag} happens only in @code{signal_quit()}, which isn't getting called, the @kbd{C-g} press is still noticed, and as soon as @code{inhibit-quit} is set back to @code{nil}, a quit will be signalled at the next @code{QUIT} macro.  Thus, what @code{inhibit-quit} really does is defer quits until after the quit-inhibited period.

@item
Another consideration, introduced by XEmacs, is critical quitting.  If you press @kbd{Control-Shift-G} instead of just @kbd{C-g}, @code{quit-flag} is set to @code{critical} instead of to @code{t}.  When @code{QUIT} processes this value, it @strong{ignores} the value of @code{inhibit-quit}.  This allows you to quit even out of a quit-inhibited section of code!  Furthermore, when @code{signal_quit()} notices that it was invoked as a result of a critical quit, it automatically invokes the debugger (which otherwise would only happen when @code{debug-on-quit} is set to @code{t}).

@item
Well, I explained above about how @code{quit-flag} gets set correctly, but I began with a disclaimer stating that this was the old way of doing things.  What's done now?  Well, first of all, the SIGIO handler (which formerly checked all pending events to see if there's a @kbd{C-g}) now does nothing but set a flag -- or actually two flags, @code{something_happened} and @code{quit_check_signal_happened}.  There are two flags because the @code{QUIT} macro is now used for more than just handling QUIT; it's also used for running asynchronous timeout handlers that have recently expired, and perhaps other things.  The idea here is that the @code{QUIT} macros occur extremely often in the code, but only occur at places that are relatively safe -- in particular, if an error occurs, nothing will get completely trashed.

@item
Now, let's look at @code{QUIT} again.

@item
UNFINISHED.  Note, however, that as of the point when this comment got committed to CVS (mid-2001), the interaction between reading @kbd{C-g} as an event and processing it as QUIT was overhauled to (for the first time) be understandable and actually work correctly.  Now, the way things work is that if @kbd{C-g} is pressed while XEmacs is blocking at the top level, waiting for a user event, it will be read as an event; otherwise, it will cause QUIT.  (This includes times when XEmacs is blocking, but not waiting for a user event, e.g. @code{accept-process-output} and @code{wait_delaying_user_events()}.)  Formerly, this was supposed to happen, but didn't always due to a bizarre and broken scheme, documented in @code{next_event_internal} like this:

@quotation
If we read a @kbd{C-g}, then set @code{quit-flag} but do not discard the @kbd{C-g}.  The callers of @code{next_event_internal()} will do one of two things:

@enumerate
@item
set @code{Vquit_flag} to @code{Qnil}.  (@code{next-event} does this.)  This will cause the ^G to be treated as a normal keystroke.
@item
not change @code{Vquit_flag} but attempt to enqueue the ^G, at which point it will be discarded.  The next time QUIT is called, it will notice that @code{Vquit_flag} was set.
@end enumerate
@end quotation

This required weirdness in @code{enqueue_command_event_1} like this:

@quotation
put the event on the typeahead queue, unless the event is the quit char, in which case the @code{QUIT} which will occur on the next trip through this loop is all the processing we should do - leaving it on the queue would cause the quit to be processed twice.
@end quotation

And further weirdness elsewhere, none of which made any sense, and didn't work, because (e.g.) it required that QUIT never happen anywhere inside @code{next_event_internal()} or any callers when @kbd{C-g} should be read as a user event, which was impossible to implement in practice.

Now what we do is fairly simple.  Callers of @code{next_event_internal()} that want @kbd{C-g} read as a user event call @code{begin_dont_check_for_quit()}.  @code{next_event_internal()}, when it gets a @kbd{C-g}, simply sets @code{Vquit_flag} (just as when a @kbd{C-g} is detected during the operation of @code{QUIT} or @code{QUITP}), and then tries to @code{QUIT}.  This will fail if blocked by the previous call, at which point @code{next_event_internal()} will return the @kbd{C-g} as an event.  To unblock things, first set @code{Vquit_flag} to @code{nil} (it was set to @code{t} when the @kbd{C-g} was read, and if we don't reset it, the next call to @code{QUIT} will quit), and then @code{unbind_to()} the depth returned by @code{begin_dont_check_for_quit()}.  It makes no difference if @code{QUIT} is called a zillion times in @code{next_event_internal()} or anywhere else, because it's blocked and will never signal.
@end enumerate

@subsection Reentrancy Problems due to QUIT Checking

Checking for QUIT can do quite a lot of things -- since it pumps the event loop, this may cause arbitrary code to get executed, garbage collection to happen, etc.  (In fact, garbage collection cannot happen because it is inhibited.)  This has led to crashes when functions get called reentrantly when not expecting it.  Example:

@subheading Crash -- reentrant @code{re_match_2()}

@example
/* dont_check_for_quit is set in three circumstances:

   (1) when we are in the process of changing the window
   configuration.  The frame might be in an inconsistent state,
   which will cause assertion failures if we check for QUIT.

   (2) when we are reading events, and want to read the C-g
   as an event.  The normal check for quit will discard the C-g,
   which would be bad.

   (3) when we're going down with a fatal error.  we're most likely
   in an inconsistent state, and we definitely don't want to be
   interrupted. */

/* We should *not* conditionalize on Vinhibit_quit, or critical-quit
   (Control-Shift-G) won't work right. */

/* WARNING: Even calling check_quit(), without actually dispatching
   a quit signal, can result in arbitrary Lisp code getting executed
   -- at least under Windows.  (Not to mention obvious Lisp
   invocations like asynchronous timer callbacks.)  Here's a sample
   stack trace to demonstrate:

NTDLL!
DbgBreakPoint@@0 address 0x77f9eea9 assert_failed(const char * 0x012d036c, int 4596, const char * 0x012d0354) line 3478 re_match_2_internal(re_pattern_buffer * 0x012d6780, const unsigned char * 0x00000000, int 0, const unsigned char * 0x022f9328, int 34, int 0, re_registers * 0x012d53d0 search_regs, int 34) line 4596 + 41 bytes re_search_2(re_pattern_buffer * 0x012d6780, const char * 0x00000000, int 0, const char * 0x022f9328, int 34, int 0, int 34, re_registers * 0x012d53d0 search_regs, int 34) line 4269 + 37 bytes re_search(re_pattern_buffer * 0x012d6780, const char * 0x022f9328, int 34, int 0, int 34, re_registers * 0x012d53d0 search_regs) line 4031 + 37 bytes string_match_1(long 31222628, long 30282164, long 28377092, buffer * 0x022fde00, int 0) line 413 + 69 bytes Fstring_match(long 31222628, long 30282164, long 28377092, long 28377092) line 436 + 34 bytes Ffuncall(int 3, long * 0x008297f8) line 3488 + 168 bytes execute_optimized_program(const unsigned char * 0x020ddc50, int 6, long * 0x020ddf50) line 744 + 16 bytes funcall_compiled_function(long 34407748, int 1, long * 0x00829aec) line 516 + 53 bytes Ffuncall(int 2, long * 0x00829ae8) line 3523 + 17 bytes execute_optimized_program(const unsigned char * 0x020ddc90, int 4, long * 0x020ddf90) line 744 + 16 bytes funcall_compiled_function(long 34407720, int 1, long * 0x00829e28) line 516 + 53 bytes Ffuncall(int 2, long * 0x00829e24) line 3523 + 17 bytes mapcar1(long 15, long * 0x00829e48, long 34447820, long 34187868) line 2929 + 11 bytes Fmapcar(long 34447820, long 34187868) line 3035 + 21 bytes Ffuncall(int 3, long * 0x00829f20) line 3488 + 93 bytes execute_optimized_program(const unsigned char * 0x020c2b70, int 7, long * 0x020dd010) line 744 + 16 bytes funcall_compiled_function(long 34407580, int 2, long * 0x0082a210) line 516 + 53 bytes Ffuncall(int 3, long * 0x0082a20c) line 3523 + 17 bytes execute_optimized_program(const unsigned char * 0x020cf810, int 6, long * 0x020cfb10) line 744 + 16 bytes funcall_compiled_function(long 34407524, int 0, long * 0x0082a580) line 516 + 53 bytes Ffuncall(int 1, long * 0x0082a57c) line 3523 + 17 bytes run_hook_with_args_in_buffer(buffer * 0x022fde00, int 1, long * 0x0082a57c, int 0) line 3980 + 13 bytes run_hook_with_args(int 1, long * 0x0082a57c, int 0) line 3993 + 23 bytes Frun_hooks(int 1, long * 0x0082a57c) line 3847 + 19 bytes run_hook(long 34447484) line 4094 + 11 bytes unsafe_handle_wm_initmenu_1(frame * 0x01dbb000) line 736 + 11 bytes unsafe_handle_wm_initmenu(long 28377092) line 807 + 11 bytes condition_case_1(long 28377116, long (long)* 0x0101c827 unsafe_handle_wm_initmenu(long), long 28377092, long (long, long)* 0x01005fa4 mswindows_modal_loop_error_handler(long, long), long 28377092) line 1692 + 7 bytes mswindows_protect_modal_loop(long (long)* 0x0101c827 unsafe_handle_wm_initmenu(long), long 28377092) line 1194 + 32 bytes mswindows_handle_wm_initmenu(HMENU__ * 0x00010199, frame * 0x01dbb000) line 826 + 17 bytes mswindows_wnd_proc(HWND__ * 0x000501da, unsigned int 278, unsigned int 65945, long 0) line 3089 + 31 bytes USER32! UserCallWinProc@@20 + 24 bytes USER32! DispatchClientMessage@@20 + 47 bytes USER32! __fnDWORD@@4 + 34 bytes NTDLL! KiUserCallbackDispatcher@@12 + 19 bytes USER32! DispatchClientMessage@@20 address 0x77e163cc USER32! DefWindowProcW@@16 + 34 bytes qxeDefWindowProc(HWND__ * 0x000501da, unsigned int 274, unsigned int 61696, long 98) line 1188 + 22 bytes mswindows_wnd_proc(HWND__ * 0x000501da, unsigned int 274, unsigned int 61696, long 98) line 3362 + 21 bytes USER32! 
UserCallWinProc@@20 + 24 bytes USER32! DispatchClientMessage@@20 + 47 bytes USER32! __fnDWORD@@4 + 34 bytes NTDLL! KiUserCallbackDispatcher@@12 + 19 bytes USER32! DispatchClientMessage@@20 address 0x77e163cc USER32! DefWindowProcW@@16 + 34 bytes qxeDefWindowProc(HWND__ * 0x000501da, unsigned int 262, unsigned int 98, long 540016641) line 1188 + 22 bytes mswindows_wnd_proc(HWND__ * 0x000501da, unsigned int 262, unsigned int 98, long 540016641) line 3362 + 21 bytes USER32! UserCallWinProc@@20 + 24 bytes USER32! DispatchMessageWorker@@8 + 244 bytes USER32! DispatchMessageW@@4 + 11 bytes qxeDispatchMessage(const tagMSG * 0x0082c684 @{msg=0x00000106 wp=0x00000062 lp=0x20300001@}) line 989 + 10 bytes mswindows_drain_windows_queue() line 1345 + 9 bytes emacs_mswindows_quit_p() line 3947 event_stream_quit_p() line 666 check_quit() line 686 check_what_happened() line 437 re_match_2_internal(re_pattern_buffer * 0x012d5a18, const unsigned char * 0x00000000, int 0, const unsigned char * 0x02235000, int 23486, int 14645, re_registers * 0x012d53d0 search_regs, int 23486) line 4717 + 14 bytes re_search_2(re_pattern_buffer * 0x012d5a18, const char * 0x02235000, int 23486, const char * 0x0223b38e, int 0, int 14645, int 8841, re_registers * 0x012d53d0 search_regs, int 23486) line 4269 + 37 bytes search_buffer(buffer * 0x022fde00, long 29077572, long 13789, long 23487, long 1, int 1, long 28377092, long 28377092, int 0) line 1224 + 89 bytes search_command(long 29077572, long 46975, long 28377116, long 28377092, long 28377092, int 1, int 1, int 0) line 1054 + 151 bytes Fre_search_forward(long 29077572, long 46975, long 28377116, long 28377092, long 28377092) line 2147 + 31 bytes Ffuncall(int 4, long * 0x0082ceb0) line 3488 + 216 bytes execute_optimized_program(const unsigned char * 0x02047810, int 13, long * 0x02080c10) line 744 + 16 bytes funcall_compiled_function(long 34187208, int 3, long * 0x0082d1b8) line 516 + 53 bytes Ffuncall(int 4, long * 0x0082d1b4) line 3523 + 17 bytes execute_optimized_program(const unsigned char * 0x01e96a10, int 6, long * 0x020ae510) line 744 + 16 bytes funcall_compiled_function(long 34186676, int 3, long * 0x0082d4a0) line 516 + 53 bytes Ffuncall(int 4, long * 0x0082d49c) line 3523 + 17 bytes execute_optimized_program(const unsigned char * 0x02156b50, int 4, long * 0x020c2db0) line 744 + 16 bytes funcall_compiled_function(long 34186564, int 2, long * 0x0082d780) line 516 + 53 bytes Ffuncall(int 3, long * 0x0082d77c) line 3523 + 17 bytes execute_optimized_program(const unsigned char * 0x0082d964, int 3, long * 0x020c2d70) line 744 + 16 bytes Fbyte_code(long 29405156, long 34352480, long 7) line 2392 + 38 bytes Feval(long 34354440) line 3290 + 187 bytes condition_case_1(long 34354572, long (long)* 0x01087232 Feval(long), long 34354440, long (long, long)* 0x01084764 run_condition_case_handlers(long, long), long 28377092) line 1692 + 7 bytes condition_case_3(long 34354440, long 28377092, long 34354572) line 1779 + 27 bytes execute_rare_opcode(long * 0x0082dc7c, const unsigned char * 0x01b090af, int 143) line 1269 + 19 bytes execute_optimized_program(const unsigned char * 0x01b09090, int 6, long * 0x020ae590) line 654 + 17 bytes funcall_compiled_function(long 34186620, int 0, long * 0x0082df68) line 516 + 53 bytes Ffuncall(int 1, long * 0x0082df64) line 3523 + 17 bytes execute_optimized_program(const unsigned char * 0x02195470, int 1, long * 0x020c2df0) line 744 + 16 bytes funcall_compiled_function(long 34186508, int 0, long * 0x0082e23c) line 516 + 53 bytes Ffuncall(int 1, long * 
0x0082e238) line 3523 + 17 bytes execute_optimized_program(const unsigned char * 0x01e5d410, int 6, long * 0x0207d410) line 744 + 16 bytes funcall_compiled_function(long 34186312, int 1, long * 0x0082e524) line 516 + 53 bytes Ffuncall(int 2, long * 0x0082e520) line 3523 + 17 bytes execute_optimized_program(const unsigned char * 0x02108fb0, int 2, long * 0x020c2e30) line 744 + 16 bytes funcall_compiled_function(long 34186340, int 0, long * 0x0082e7fc) line 516 + 53 bytes Ffuncall(int 1, long * 0x0082e7f8) line 3523 + 17 bytes execute_optimized_program(const unsigned char * 0x020fe150, int 2, long * 0x01e6f510) line 744 + 16 bytes funcall_compiled_function(long 31008124, int 0, long * 0x0082ebd8) line 516 + 53 bytes Ffuncall(int 1, long * 0x0082ebd4) line 3523 + 17 bytes run_hook_with_args_in_buffer(buffer * 0x022fde00, int 1, long * 0x0082ebd4, int 0) line 3980 + 13 bytes run_hook_with_args(int 1, long * 0x0082ebd4, int 0) line 3993 + 23 bytes Frun_hooks(int 1, long * 0x0082ebd4) line 3847 + 19 bytes Ffuncall(int 2, long * 0x0082ebd0) line 3509 + 14 bytes execute_optimized_program(const unsigned char * 0x01ef2210, int 5, long * 0x01da8e10) line 744 + 16 bytes funcall_compiled_function(long 31020440, int 2, long * 0x0082eeb8) line 516 + 53 bytes Ffuncall(int 3, long * 0x0082eeb4) line 3523 + 17 bytes execute_optimized_program(const unsigned char * 0x0082f09c, int 3, long * 0x01d89390) line 744 + 16 bytes Fbyte_code(long 31102388, long 30970752, long 7) line 2392 + 38 bytes Feval(long 31087568) line 3290 + 187 bytes condition_case_1(long 30961240, long (long)* 0x01087232 Feval(long), long 31087568, long (long, long)* 0x01084764 run_condition_case_handlers(long, long), long 28510180) line 1692 + 7 bytes condition_case_3(long 31087568, long 28510180, long 30961240) line 1779 + 27 bytes execute_rare_opcode(long * 0x0082f450, const unsigned char * 0x01ef23ec, int 143) line 1269 + 19 bytes execute_optimized_program(const unsigned char * 0x01ef2310, int 6, long * 0x01da8f10) line 654 + 17 bytes funcall_compiled_function(long 31020412, int 1, long * 0x0082f740) line 516 + 53 bytes Ffuncall(int 2, long * 0x0082f73c) line 3523 + 17 bytes execute_optimized_program(const unsigned char * 0x020fe650, int 3, long * 0x01d8c490) line 744 + 16 bytes funcall_compiled_function(long 31020020, int 2, long * 0x0082fa14) line 516 + 53 bytes Ffuncall(int 3, long * 0x0082fa10) line 3523 + 17 bytes Fcall_interactively(long 29685180, long 28377092, long 28377092) line 1008 + 22 bytes Fcommand_execute(long 29685180, long 28377092, long 28377092) line 2929 + 17 bytes execute_command_event(command_builder * 0x01be1900, long 36626492) line 4048 + 25 bytes Fdispatch_event(long 36626492) line 4341 + 70 bytes Fcommand_loop_1() line 582 + 9 bytes command_loop_1(long 28377092) line 495 condition_case_1(long 28377188, long (long)* 0x01064fb9 command_loop_1(long), long 28377092, long (long, long)* 0x010649d0 cmd_error(long, long), long 28377092) line 1692 + 7 bytes command_loop_3() line 256 + 35 bytes command_loop_2(long 28377092) line 269 internal_catch(long 28457612, long (long)* 0x01064b20 command_loop_2(long), long 28377092, int * volatile 0x00000000) line 1317 + 7 bytes initial_command_loop(long 28377092) line 305 + 25 bytes STACK_TRACE_EYE_CATCHER(int 1, char * * 0x01b63ff0, char * * 0x01ca5300, int 0) line 2501 main(int 1, char * * 0x01b63ff0, char * * 0x01ca5300) line 2938 XEMACS! mainCRTStartup + 180 bytes _start() line 171 KERNEL32! 
BaseProcessStart@@4 + 115547 bytes @end example [explain dont_check_for_quit() et al] @node Profiling, Asynchronous Timeouts, Control-G (Quit) Checking, Asynchronous Events; Quit Checking @section Profiling @cindex profiling @cindex SIGPROF We implement our own profiling scheme so that we can determine things like which Lisp functions are occupying the most time. Any standard OS-provided profiling works on C functions, which is not always that useful -- and inconvenient, since it requires compiling with profile info and can't be retrieved dynamically, as XEmacs is running. The basic idea is simple. We set a profiling timer using setitimer (ITIMER_PROF), which generates a SIGPROF every so often. (This runs not in real time but rather when the process is executing or the system is running on behalf of the process -- at least, that is the case under Unix. Under MS Windows and Cygwin, there is no @code{setitimer()}, so we simulate it using multimedia timers, which run in real time. To make the results a bit more realistic, we ignore ticks that go off while blocking on an event wait. Note that Cygwin does provide a simulation of @code{setitimer()}, but it's in real time anyway, since Windows doesn't provide a way to have process-time timers, and furthermore, it's broken, so we don't use it.) When the signal goes off, we see what we're in, and add 1 to the count associated with that function. It would be nice to use the Lisp allocation mechanism etc. to keep track of the profiling information (i.e. to use Lisp hash tables), but we can't because that's not safe -- updating the timing information happens inside of a signal handler, so we can't rely on not being in the middle of Lisp allocation, garbage collection, @code{malloc()}, etc. Trying to make it work would be much more work than it's worth. Instead we use a basic (non-Lisp) hash table, which will not conflict with garbage collection or anything else as long as it doesn't try to resize itself. Resizing itself, however (which happens as a result of a @code{puthash()}), could be deadly. To avoid this, we make sure, at points where it's safe (e.g. @code{profile_record_about_to_call()} -- recording the entry into a function call), that the table always has some breathing room in it so that no resizes will occur until at least that many items are added. This is safe because any new item to be added in the sigprof would likely have the @code{profile_record_about_to_call()} called just before it, and the breathing room is checked. In general: any entry that the sigprof handler puts into the table comes from a backtrace frame (except "Processing Events at Top Level", and there's only one of those). Either that backtrace frame was added when profiling was on (in which case @code{profile_record_about_to_call()} was called and the breathing space updated), or when it was off -- and in this case, no such frames can have been added since the last time @code{start-profile} was called, so when @code{start-profile} is called we make sure there is sufficient breathing room to account for all entries currently on the stack. Jan 1998: In addition to timing info, I have added code to remember call counts of Lisp funcalls. The @code{profile_increase_call_count()} function is called from @code{Ffuncall()}, and serves to add data to Vcall_count_profile_table. This mechanism is much simpler and independent of the SIGPROF-driven one. It uses the Lisp allocation mechanism normally, since it is not called from a handler. 
It may even be useful to provide a way to turn on only one profiling mechanism, but I haven't done so yet. --hniksic Dec 2002: Total overhaul of the interface, making it sane and easier to use. --ben Feb 2003: Lots of rewriting of the internal code. Add GC-consing-usage, total GC usage, and total timing to the information tracked. Track profiling overhead and allow the ability to have internal sections (e.g. internal-external conversion, byte-char conversion) that are treated like Lisp functions for the purpose of profiling. --ben BEWARE: If you are modifying this file, be @strong{very} careful. Correctly implementing the "total" values is very tricky due to the possibility of recursion and of functions already on the stack when starting to profile/still on the stack when stopping. @node Asynchronous Timeouts, Exiting, Profiling, Asynchronous Events; Quit Checking @section Asynchronous Timeouts @cindex asynchronous timeouts @node Exiting, , Asynchronous Timeouts, Asynchronous Events; Quit Checking @section Exiting @cindex exiting @cindex crash @cindex hang @cindex core dump @cindex Armageddon @cindex exits, expected and unexpected @cindex unexpected exits @cindex expected exits Ben's capsule summary about expected and unexpected exits from XEmacs. Expected exits occur when the user directs XEmacs to exit, for example by pressing the close button on the only frame in XEmacs, or by typing @kbd{C-x C-c}. This runs @code{save-buffers-kill-emacs}, which saves any necessary buffers, and then exits using the primitive @code{kill-emacs}. However, unexpected exits occur in a few different ways: @itemize @bullet @item A memory access violation or other hardware-generated exception occurs. This is the worst possible problem to deal with, because the fault can occur while XEmacs is in any state whatsoever, even quite unstable ones. As a result, we need to be @strong{extremely} careful what we do. @item We are using one X display (or if we've used more, we've closed the others already), and some hardware or other problem happens and suddenly we've lost our connection to the display. In this situation, things are not so dire as in the last one; our code itself isn't trashed, so we can continue execution as normal, after having set things up so that we can exit at the appropriate time. Our exit still needs to be of the emergency nature; we have no displays, so any attempts to use them will fail. We simply want to auto-save (the single most important thing to do during shut-down), do minimal cleanup of stuff that has an independent existence outside of XEmacs, and exit. @end itemize Currently, both unexpected exit scenarios described above set @code{preparing_for_armageddon} to indicate that nonessential and possibly dangerous things should not be done, specifically: @itemize @minus @item no garbage collection. @item no hooks are run. @item no messages of any sort from autosaving. @item autosaving tries harder, ignoring certain failures. @item existing frames are not deleted. @end itemize (Also, all places that set @code{preparing_for_armageddon} also set @code{dont_check_for_quit}. This happens separately because it's also necessary to set other variables to make absolutely sure no quitting happens.) In the first scenario above (the access violation), we also set @code{fatal_error_in_progress}. This causes more things to not happen: @itemize @minus @item assertion failures do not abort. @item printing code does not do code conversion or gettext when printing to stdout/stderr. 
@end itemize

@node Lstreams, Subprocesses, Asynchronous Events; Quit Checking, Top
@chapter Lstreams
@cindex lstreams

An @dfn{lstream} is an internal Lisp object that provides a generic buffering stream implementation.  Conceptually, you send data to the stream or read data from the stream, not caring what's on the other end of the stream.  The other end could be another stream, a file descriptor, a stdio stream, a fixed block of memory, a reallocating block of memory, etc.  The main purpose of the stream is to provide a standard interface and to do buffering.  Macros are defined to read or write characters, so the calling functions do not have to worry about blocking data together in order to achieve efficiency.

@menu
* Creating an Lstream::    Creating an lstream object.
* Lstream Types::          Different sorts of things that are streamed.
* Lstream Functions::      Functions for working with lstreams.
* Lstream Methods::        Creating new lstream types.
@end menu

@node Creating an Lstream, Lstream Types, Lstreams, Lstreams
@section Creating an Lstream
@cindex lstream, creating an

Lstreams come in different types, depending on what is being interfaced to.  Although the primitive for creating new lstreams is @code{Lstream_new()}, generally you do not call this directly.  Instead, you call some type-specific creation function, which creates the lstream and initializes it as appropriate for the particular type.

All lstream creation functions take a @var{mode} argument, specifying what mode the lstream should be opened as.  This controls whether the lstream is for input or output, and optionally whether data should be blocked up in units of MULE characters.  Note that some types of lstreams can only be opened for input; others only for output; and others can be opened either way.  #### Richard Mlynarik thinks that there should be a strict separation between input and output streams, and he's probably right.

@var{mode} is a string, one of

@table @code
@item "r"
Open for reading.
@item "w"
Open for writing.
@item "rc"
Open for reading, but ``read'' never returns partial MULE characters.
@item "wc"
Open for writing, but never writes partial MULE characters.
@end table

@node Lstream Types, Lstream Functions, Creating an Lstream, Lstreams
@section Lstream Types
@cindex lstream types
@cindex types, lstream

@table @asis
@item stdio
@item filedesc
@item lisp-string
@item fixed-buffer
@item resizing-buffer
@item dynarr
@item lisp-buffer
@item print
@item decoding
@item encoding
@end table

@node Lstream Functions, Lstream Methods, Lstream Types, Lstreams
@section Lstream Functions
@cindex lstream functions

@deftypefun {Lstream *} Lstream_new (Lstream_implementation *@var{imp}, const char *@var{mode})
Allocate and return a new Lstream.  This function is not really meant to be called directly; rather, each stream type should provide its own stream creation function, which creates the stream and does any other necessary creation stuff (e.g. opening a file).
@end deftypefun

@deftypefun void Lstream_set_buffering (Lstream *@var{lstr}, Lstream_buffering @var{buffering}, int @var{buffering_size})
Change the buffering of a stream.  See @file{lstream.h}.  By default the buffering is @code{STREAM_BLOCK_BUFFERED}.
@end deftypefun

@deftypefun int Lstream_flush (Lstream *@var{lstr})
Flush out any pending unwritten data in the stream.  Clear any buffered input data.  Returns 0 on success, -1 on error.
@end deftypefun

@deftypefn Macro int Lstream_putc (Lstream *@var{stream}, int @var{c})
Write out one byte to the stream.
This is a macro and so it is very efficient.  The @var{c} argument is only evaluated once but the @var{stream} argument is evaluated more than once.  Returns 0 on success, -1 on error.
@end deftypefn

@deftypefn Macro int Lstream_getc (Lstream *@var{stream})
Read one byte from the stream.  This is a macro and so it is very efficient.  The @var{stream} argument is evaluated more than once.  Return value is -1 for EOF or error.
@end deftypefn

@deftypefn Macro void Lstream_ungetc (Lstream *@var{stream}, int @var{c})
Push one byte back onto the input queue.  This will be the next byte read from the stream.  Any number of bytes can be pushed back and will be read in the reverse order they were pushed back---most recent first.  (This is necessary for consistency---if there are a number of bytes that have been unread and I read and unread a byte, it needs to be the first to be read again.)  This is a macro and so it is very efficient.  The @var{c} argument is only evaluated once but the @var{stream} argument is evaluated more than once.
@end deftypefn

@deftypefun int Lstream_fputc (Lstream *@var{stream}, int @var{c})
@deftypefunx int Lstream_fgetc (Lstream *@var{stream})
@deftypefunx void Lstream_fungetc (Lstream *@var{stream}, int @var{c})
Function equivalents of the above macros.
@end deftypefun

@deftypefun Bytecount Lstream_read (Lstream *@var{stream}, void *@var{data}, Bytecount @var{size})
Read @var{size} bytes from the stream into @var{data}.  Return the number of bytes read.  0 means EOF.  -1 means an error occurred and no bytes were read.
@end deftypefun

@deftypefun Bytecount Lstream_write (Lstream *@var{stream}, void *@var{data}, Bytecount @var{size})
Write @var{size} bytes of @var{data} to the stream.  Return the number of bytes written.  -1 means an error occurred and no bytes were written.
@end deftypefun

@deftypefun void Lstream_unread (Lstream *@var{stream}, void *@var{data}, Bytecount @var{size})
Push back @var{size} bytes of @var{data} onto the input queue.  The next call to @code{Lstream_read()} with the same size will read the same bytes back.  Note that this will be the case even if there is other pending unread data.
@end deftypefun

@deftypefun int Lstream_close (Lstream *@var{stream})
Close the stream.  All data will be flushed out.
@end deftypefun

@deftypefun void Lstream_reopen (Lstream *@var{stream})
Reopen a closed stream.  This enables I/O on it again.  This is not meant to be called except from a wrapper routine that reinitializes variables and such---the close routine may well have freed some necessary storage structures, for example.
@end deftypefun

@deftypefun void Lstream_rewind (Lstream *@var{stream})
Rewind the stream to the beginning.
@end deftypefun

@node Lstream Methods, , Lstream Functions, Lstreams
@section Lstream Methods
@cindex lstream methods

@deftypefn {Lstream Method} Bytecount reader (Lstream *@var{stream}, unsigned char *@var{data}, Bytecount @var{size})
Read some data from the stream's end and store it into @var{data}, which can hold @var{size} bytes.  Return the number of bytes read.  A return value of 0 means no bytes can be read at this time.  This may be because of an EOF, or because there is a granularity greater than one byte that the stream imposes on the returned data, and @var{size} is less than this granularity.  (This will happen frequently for streams that need to return whole characters, because @code{Lstream_read()} calls the reader function repeatedly until it has the number of bytes it wants or until 0 is returned.)
The lstream functions do not treat a 0 return as EOF or do anything special; however, the calling function will interpret any 0 it gets back as EOF. This will normally not happen unless the caller calls @code{Lstream_read()} with a very small size. This function can be @code{NULL} if the stream is output-only. @end deftypefn @deftypefn {Lstream Method} Bytecount writer (Lstream *@var{stream}, const unsigned char *@var{data}, Bytecount @var{size}) Send some data to the stream's end. Data to be sent is in @var{data} and is @var{size} bytes. Return the number of bytes sent. This function can send and return fewer bytes than is passed in; in that case, the function will just be called again until there is no data left or 0 is returned. A return value of 0 means that no more data can be currently stored, but there is no error; the data will be squirreled away until the writer can accept data. (This is useful, e.g., if you're dealing with a non-blocking file descriptor and are getting @code{EWOULDBLOCK} errors.) This function can be @code{NULL} if the stream is input-only. @end deftypefn @deftypefn {Lstream Method} int rewinder (Lstream *@var{stream}) Rewind the stream. If this is @code{NULL}, the stream is not seekable. @end deftypefn @deftypefn {Lstream Method} int seekable_p (Lstream *@var{stream}) Indicate whether this stream is seekable---i.e. it can be rewound. This method is ignored if the stream does not have a rewind method. If this method is not present, the result is determined by whether a rewind method is present. @end deftypefn @deftypefn {Lstream Method} int flusher (Lstream *@var{stream}) Perform any additional operations necessary to flush the data in this stream. @end deftypefn @deftypefn {Lstream Method} int pseudo_closer (Lstream *@var{stream}) @end deftypefn @deftypefn {Lstream Method} int closer (Lstream *@var{stream}) Perform any additional operations necessary to close this stream down. May be @code{NULL}. This function is called when @code{Lstream_close()} is called or when the stream is garbage-collected. When this function is called, all pending data in the stream will already have been written out. @end deftypefn @deftypefn {Lstream Method} Lisp_Object marker (Lisp_Object @var{lstream}, void (*@var{markfun}) (Lisp_Object)) Mark this object for garbage collection. Same semantics as a standard @code{Lisp_Object} marker. This function can be @code{NULL}. @end deftypefn @node Subprocesses, Interface to MS Windows, Lstreams, Top @chapter Subprocesses @cindex subprocesses The fields of a process are: @table @code @item name A string, the name of the process. @item command A list containing the command arguments that were used to start this process. @item filter A function used to accept output from the process instead of a buffer, or @code{nil}. @item sentinel A function called whenever the process receives a signal, or @code{nil}. @item buffer The associated buffer of the process. @item pid An integer, the Unix process @sc{id}. @item childp A flag, non-@code{nil} if this is really a child process. It is @code{nil} for a network connection. @item mark A marker indicating the position of the end of the last output from this process inserted into the buffer. This is often but not always the end of the buffer. @item kill_without_query If this is non-@code{nil}, killing XEmacs while this process is still running does not ask for confirmation about killing the process. 
@item raw_status_low
@itemx raw_status_high
These two fields record 16 bits each of the process status returned by
the @code{wait} system call.
@item status
The process status, as @code{process-status} should return it.
@item tick
@itemx update_tick
If these two fields are not equal, a change in the status of the process
needs to be reported, either by running the sentinel or by inserting a
message in the process buffer.
@item pty_flag
Non-@code{nil} if communication with the subprocess uses a @sc{pty};
@code{nil} if it uses a pipe.
@item infd
The file descriptor for input from the process.
@item outfd
The file descriptor for output to the process.
@item subtty
The file descriptor for the terminal that the subprocess is using.  (On
some systems, there is no need to record this, so the value is
@code{-1}.)
@item tty_name
The name of the terminal that the subprocess is using, or @code{nil} if
it is using pipes.
@end table

@node Interface to MS Windows, Interface to the X Window System, Subprocesses, Top
@chapter Interface to MS Windows
@cindex MS Windows, interface to
@cindex Windows, interface to

@menu
* Different kinds of Windows environments::
* Windows Build Flags::
* Windows I18N Introduction::
* Modules for Interfacing with MS Windows::
@end menu

@node Different kinds of Windows environments, Windows Build Flags, Interface to MS Windows, Interface to MS Windows
@section Different kinds of Windows environments
@cindex different kinds of Windows environments
@cindex Windows environments, different kinds of
@cindex MS Windows environments, different kinds of

@subsubheading (a) operating system (OS) vs. window system vs. Win32 API vs. C runtime library (CRT) vs. compiler

There are various Windows operating systems (Windows NT, 2000, XP, 95,
98, ME, etc.), which come in two basic classes: Windows NT (NT, 2000,
XP, and all future versions) and 9x (95, 98, ME).  9x-class operating
systems are a kind of hodgepodge of a 32-bit upper layer on top of a
16-bit MS-DOS-compatible lower layer.  NT-class operating systems are
written from the ground up as 32-bit (there are also 64-bit versions
available now), and provide many more features and much greater
stability, since there is full memory protection between all processes
and between processes and the system.  NT-class operating systems also
provide emulation for DOS programs inside of a ``sandbox'' (i.e. a
walled-off environment in which one DOS program can screw up another
one, but there is theoretically no way for a DOS program to screw up the
OS itself).  From the perspective of XEmacs, the difference between NT
and 9x is very important in Unicode support (not really provided under
9x -- see @file{intl-win32.c}) and subprocess creation, among other
things.

The operating system provides the framework for accessing files and
devices and running programs.  From the perspective of a program, the
operating system provides a set of services.  At the lowest level, the
way to call these services is dependent on the processor the OS is
running on, but a portable interface is provided to C programs through
functions called ``system calls''.  Under Windows, this interface is
called the Win32 API, and includes file-manipulation calls such as
@code{CreateFile()} and @code{ReadFile()}, process-creation calls such
as @code{CreateProcess()}, etc.  This concept of system calls goes back
to Unix, where similar services are available but through routines with
different, simpler names, such as @code{open()}, @code{read()},
@code{fork()}, @code{execve()}, etc.
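For illustration, here is a minimal sketch of the same low-level
operation -- opening a file and reading its first bytes -- through each
interface.  This example is not from the XEmacs sources; the
@code{read_header()} function, its buffer size, and its error handling
are invented, and @code{WIN32_NATIVE} is the XEmacs build flag described
later in this chapter:

@example
#ifdef WIN32_NATIVE
#include <windows.h>
#else
#include <fcntl.h>
#include <unistd.h>
#endif

/* Read up to 512 bytes from the start of PATH into BUF, using the
   native system-call interface of each platform.  Returns the number
   of bytes read, or -1 on error. */
static int
read_header (const char *path, char *buf)
@{
#ifdef WIN32_NATIVE
  DWORD nread = 0;
  HANDLE h = CreateFileA (path, GENERIC_READ, FILE_SHARE_READ, NULL,
                          OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
  if (h == INVALID_HANDLE_VALUE)
    return -1;
  if (!ReadFile (h, buf, 512, &nread, NULL))
    nread = (DWORD) -1;
  CloseHandle (h);
  return (int) nread;
#else
  int nread;
  int fd = open (path, O_RDONLY);
  if (fd < 0)
    return -1;
  nread = (int) read (fd, buf, 512);
  close (fd);
  return nread;
#endif
@}
@end example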
In addition, Unix provides a higher layer of routines, called the C Runtime Library (CRT), which provide higher-level, more convenient versions of the same services (e.g. "stream-oriented" file routines such as @code{fopen()} and @code{fread()}) as well as various other utility functions, such as string-manipulation routines (e.g. @code{strcpy()} and @code{strcmp()}). For compatibility, a C Runtime Library (CRT) is also provided under Windows, which provides a partial implementation of both the Unix CRT and the Unix system-call API, implemented using the Win32 API. The CRT sources come with Visual C++ (VC++). For example, under VC++ 6, look in the CRT/SRC directory, e.g. for me (ben): /Program Files/Microsoft Visual Studio/VC98/CRT/SRC. The CRT is provided using either MSVCRT (dynamically linked) or @file{LIBC.LIB} (statically linked). The window system provides the framework for creating overlapped windows and unifying signals provided by various devices (input devices such as the keyboard and mouse, timers, etc.) into a single event queue (or "message queue", under Windows). Like the operating system, the window system can be viewed from the perspective of a program as a set of services provided by an API of function calls. Under Windows, window-system services are also available through the Win32 API, while under UNIX the window system is typically a separate component (e.g. the X Windowing System, aka X Windows or X11). The term "GUI" ("graphical user interface") is often used to refer to the services provided by the window system, or to a windowing interface provided by a program. The Win32 API is implemented by various dynamic libraries, or DLL's. The most important are KERNEL32, USER32, and GDI32. KERNEL32 implements the basic file-system and process services. USER32 implements the fundamental window-system services such as creating windows and handling messages. GDI32 implements higher-level drawing capabilities -- fonts, colors, lines, etc. C programs are compiled into executables using a compiler. Under Unix, a compiler usually comes as part of the operating system, but not under Windows, where the compiler is a separate product. Even under Unix, people often install their own compilers, such as gcc. Under Windows, the Microsoft-standard compiler is Visual C++ (VC++). It is possible to provide an emulation of any API using any other, as long as the underlying API provides the suitable functionality. This is what Cygwin (www.cygwin.com) does. It provides a fairly complete POSIX emulation layer (POSIX is a government standard for Unix behavior) on top of MS Windows -- in particular, providing the file-system, process, tty, and signal semantics that are part of a modern, standard Unix operating system. Cygwin does this using its own DLL, @file{cygwin1.dll}, which makes calls to the Win32 API services in @file{kernel32.dll}. Cygwin also provides its own implementation of the C runtime library, called @code{newlib} (@file{libcygwin.a}; @file{libc.a} and @file{libm.a} are symlinked to it), which is implemented on top of the Unix system calls provided in @file{cygwin1.dll}. In addition, Cygwin provides static import libraries that give you direct access to the Win32 API -- XEmacs uses this to provide GUI support under Cygwin. Cygwin provides a version of GCC (the GNU Project C compiler) that is set up to automatically link with the appropriate Cygwin libraries. 
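To see what this layering buys you: the following completely standard
POSIX program compiles and runs unchanged under Cygwin on Windows,
because @code{fork()}, @code{execlp()}, and @code{waitpid()} are
emulated by @file{cygwin1.dll} on top of the Win32 API.  (The program
itself is just a generic example, not code from XEmacs.)

@example
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int
main (void)
@{
  pid_t pid = fork ();          /* emulated by cygwin1.dll */

  if (pid < 0)
    @{
      perror ("fork");
      return 1;
    @}
  if (pid == 0)
    @{
      /* Child: run `ls -l' via the emulated exec machinery. */
      execlp ("ls", "ls", "-l", (char *) NULL);
      _exit (127);              /* only reached if execlp failed */
    @}
  else
    @{
      int status;
      waitpid (pid, &status, 0);
      printf ("child exited with status %d\n", WEXITSTATUS (status));
    @}
  return 0;
@}
@end example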
Cygwin also provides, as optional components, pre-compiled binaries for
a great number of open-source programs compiled under the Cygwin
environment.  This includes all of the standard Unix file-system,
text-manipulation, development, networking, database, etc. utilities, a
version of X Windows that uses the Win32 API underlyingly (see below),
and compilations of nearly all other common open-source packages
(Apache, TeX, [X]Emacs, Ghostscript, GTK, ImageMagick, etc.).

Similarly, you can emulate the functionality of X Windows using the
windowing component of the Win32 API.  Cygwin provides a package to do
this, from the XFree86 project.  Other versions of X under Windows also
exist, such as the MicroImages MI/X server.  Each version potentially
comes with its own header and library files, allowing you to compile
X-Windows programs.

All of these different operating system and emulation layers can make
for a fair amount of confusion, so:

@subsubheading (b) CRT is not the same as VC++

Note that the CRT is @strong{NOT} (completely) part of VC++.  True, if
you link statically, the CRT (in the form of @file{LIBC.LIB}, which
comes with VC++) will be inserted into the executable (.EXE), but
otherwise the CRT will be separate.  The dynamic version of the CRT is
provided by @file{MSVCRT.DLL} (or @file{MSVCRTD.DLL}, for debugging),
which comes with Windows.  Hence, it's possible to use a different
compiler and still link with MSVCRT -- which is exactly what MinGW does.

@subsubheading (c) CRT is not the same as the Win32 API

Note also that the CRT is totally separate from the Win32 API.  They
provide different functions and are implemented in different DLL's.
They are also different levels -- the CRT is implemented on top of
Win32.  Sometimes the CRT and Win32 both have their own versions of
similar concepts, such as locales.  These are typically maintained
separately, and can get out of sync.  Do not assume that changing a
setting in the CRT will have any effect on Win32 API routines using a
similar concept unless the CRT docs specifically say so.  Do not assume
that behavior described for CRT functions applies to the Win32 API or
vice-versa.  Note also that the CRT knows about and is implemented on
top of the Win32 API, while the Win32 API knows nothing about the CRT.

@subsubheading (d) MinGW is not the same as Cygwin

As described in (b), Microsoft's version of the CRT (@file{MSVCRT.DLL})
is provided as part of Windows, separate from VC++, which must be
purchased.  Hence, it is possible to use MSVCRT to provide CRT services
without using VC++.  This is what MinGW (www.mingw.org) does -- it is a
port of GCC that will use MSVCRT.  The reason one might want to do this
is (a) it is free, and (b) it does not require a separately installed
DLL, as Cygwin does.  (#### Maybe MinGW targets CRTDLL, not MSVCRT?  If
so, what is CRTDLL, and how does it differ from MSVCRT and
@file{LIBC.LIB}?)

Primarily, what MinGW provides is patches to GCC (now integrated into
the standard distribution) and its own header files and import libraries
that are compatible with MSVCRT.  The best way to think of MinGW is as
simply another Windows compiler, much as there used to be separate
Microsoft and Borland compilers.  Because MinGW programs use all the
same libraries as VC++ programs, and hence the same services are
available, programs that compile under VC++ should compile under MinGW
with very little change, whereas programs that compile under Cygwin will
look quite different.
The confusion between MinGW and Cygwin is the confusion between the
environment that a compiler runs under and the target environment of a
program, i.e. the environment that a program is compiled to run under.
It's theoretically possible, for example, to compile a program under
Windows and generate a binary that can only be run under Linux, or
vice-versa -- or, for that matter, to use Windows, running on an Intel
machine, to write and compile a program that will run on the Mac OS,
running on a PowerPC machine.  This is called cross-compiling, and while
it may seem rather esoteric, it is quite normal when you want to
generate a program for a machine that you cannot develop on -- for
example, a program that will run on a Palm Pilot.  Originally, this was
how MinGW worked -- you needed to run GCC under a Cygwin environment and
give it appropriate flags, telling it to use the MinGW headers and
target @file{MSVCRT.DLL} rather than @file{CYGWIN1.DLL}.  (In fact,
Cygwin standardly comes with MinGW's header files.)  This was because
GCC was written with Unix in mind and relied on a large amount of
Unix-specific functionality.  To port GCC to Windows without using a
POSIX emulation layer would mean a lot of rewriting of GCC.  Eventually,
however, this was done, and GCC was itself compiled using MinGW.  The
result is that currently you can develop MinGW applications either under
Cygwin or under native Windows.

@subsubheading (e) Operating system is not the same as window system

As per the above discussion, we can use either Native Windows (the OS
part of Win32 provided by @file{KERNEL32.DLL} and the Windows CRT as
provided by @file{MSVCRT.DLL} or @file{LIBC.LIB}) or Cygwin to provide
operating-system functionality, and we can use either Native Windows
(the windowing part of Win32 as provided by @file{USER32.DLL} and
@file{GDI32.DLL}) or X11 to provide window-system functionality.  This
gives us four possible build environments.  It's currently possible to
build XEmacs with at least three of these combinations -- as far as I
know native + X11 is no longer supported, although it used to be
(support used to exist in @file{xemacs.mak} for linking with some X11
libraries available from somewhere, but it was bit-rotting and you could
always use Cygwin; #### what happens if we try to compile with MinGW,
native OS + X11?).

This may still seem confusing, so:

@table @asis
@item Native OS + native windowing
We call @code{CreateProcess()} to run subprocesses
(@file{process-nt.c}), and @code{CreateWindowEx()} to create a top-level
window (@file{frame-msw.c}).  We use @file{nt/xemacs.mak} to compile
with VC++, linking with the Windows CRT (@file{MSVCRT.DLL} or
@file{LIBC.LIB}) and with the various Win32 DLL's (@file{KERNEL32.DLL},
@file{USER32.DLL}, @file{GDI32.DLL}); or we use
@file{src/Makefile[.in.in]} to compile with GCC, telling it
(e.g. -mno-cygwin, see @file{s/mingw32.h}) to use MinGW (which will end
up linking with @file{MSVCRT.DLL}), and linking GCC with -lshell32
-lgdi32 -luser32 etc. (see @file{configure.in}).

@item Cygwin + native windowing
We call @code{fork()}/@code{execve()} to run subprocesses
(@file{process-unix.c}), and @code{CreateWindowEx()} to create a
top-level window (@file{frame-msw.c}).  We use
@file{src/Makefile[.in.in]} to compile with GCC (it will end up linking
with @file{CYGWIN1.DLL}) and link GCC with -lshell32 -lgdi32 -luser32
etc. (see @file{configure.in}).
@item Cygwin + X11
We call @code{fork()}/@code{execve()} to run subprocesses
(@file{process-unix.c}), and @code{XtCreatePopupShell()} to create a
top-level window (@file{frame-x.c}).  We use
@file{src/Makefile[.in.in]} to compile with GCC (it will end up linking
with @file{CYGWIN1.DLL}) and link GCC with -lXt, -lX11, etc. (see
@file{configure.in}).

Finally, if native OS + X11 were possible, it might look something like
this:

@item [Native OS + X11]
We call @code{CreateProcess()} to run subprocesses
(@file{process-nt.c}), and @code{XtCreatePopupShell()} to create a
top-level window (@file{frame-x.c}).  We use @file{nt/xemacs.mak} to
compile with VC++, linking with the Windows CRT (@file{MSVCRT.DLL} or
@file{LIBC.LIB}) and with the various X11 DLL's (@file{XT.DLL},
@file{XLIB.DLL}, etc.); or we use @file{src/Makefile[.in.in]} to compile
with GCC, telling it (e.g. -mno-cygwin, see @file{s/mingw32.h}) to use
MinGW (which will end up linking with @file{MSVCRT.DLL}), and linking
GCC with -lXt, -lX11, etc. (see @file{configure.in}).
@end table

One of the reasons that we maintain the ability to build under Cygwin
and X11 on Windows, when we have native support, is that it allows
Windows developers to test under a Unix-like environment.

@node Windows Build Flags, Windows I18N Introduction, Different kinds of Windows environments, Interface to MS Windows
@section Windows Build Flags
@cindex Windows build flags
@cindex MS Windows build flags
@cindex build flags, Windows

@table @code
@item CYGWIN
for Cygwin-only stuff.
@item WIN32_NATIVE
Win32 native OS-level stuff (files, process, etc.).  Applies whenever
linking against the native C libraries -- i.e. all compilations with
VC++ and with MINGW, but never Cygwin.
@item HAVE_X_WINDOWS
for X Windows (regardless of whether under MS Win)
@item HAVE_MS_WINDOWS
MS Windows native windowing system (anything related to the appearance
of the graphical screen).  May or may not apply to any of VC++, MINGW,
Cygwin.
@end table

Finally, there's also the MINGW build environment, which uses GCC
(similar to Cygwin), but native MS Windows libraries rather than a POSIX
emulation layer (the Cygwin approach).  This environment defines
WIN32_NATIVE, but also defines MINGW, which is used mostly because MinGW
uses its own include files (related to Cygwin), which have a few things
messed up.

Formerly, we had a whole host of flags.  Here's the conversion, for
porting code from GNU Emacs and such:

@c @multitable {Old Constant} {determine whether this code is really specific to MS-DOS (and not Windows -- e.g. DJGPP code}
@multitable @columnfractions .25 .75
@item Old Constant @tab New Constant
@item ----------------------------------------------------------------
@item @code{WINDOWSNT} @tab @code{WIN32_NATIVE}
@item @code{WIN32} @tab @code{WIN32_NATIVE}
@item @code{_WIN32} @tab @code{WIN32_NATIVE}
@item @code{HAVE_WIN32} @tab @code{WIN32_NATIVE}
@item @code{DOS_NT} @tab @code{WIN32_NATIVE}
@item @code{HAVE_NTGUI} @tab @code{WIN32_NATIVE}, unless it ends up
already bracketed by this
@item @code{HAVE_FACES} @tab always true
@item @code{MSDOS} @tab determine whether this code is really specific
to MS-DOS (and not Windows -- e.g.
DJGPP code); if so, delete the code; otherwise, convert to @code{WIN32_NATIVE} (we do not support MS-DOS w/DOS Extender under XEmacs) @item @code{__CYGWIN__} @tab @code{CYGWIN} @item @code{__CYGWIN32__} @tab @code{CYGWIN} @item @code{__MINGW32__} @tab @code{MINGW} @end multitable @node Windows I18N Introduction, Modules for Interfacing with MS Windows, Windows Build Flags, Interface to MS Windows @section Windows I18N Introduction @cindex Windows I18N @cindex I18N, Windows @cindex MS Windows I18N @strong{Abstract:} This page provides an overview of the aspects of the Win32 internationalization API that are relevant to XEmacs, including the basic distinction between multibyte and Unicode encodings. Also included are pointers to how XEmacs should make use of this API. The Win32 API is quite well-designed in its handling of strings encoded for various character sets. The API is geared around the idea that two different methods of encoding strings should be supported. These methods are called multibyte and Unicode, respectively. The multibyte encoding is compatible with ASCII strings and is a more efficient representation when dealing with strings containing primarily ASCII characters, but it has a great number of serious deficiencies and limitations, including that it is very difficult and error-prone to work with strings in this encoding, and any particular string in a multibyte encoding can only contain characters from a very limited number of character sets. The Unicode encoding rectifies all of these deficiencies, but it is not compatible with ASCII strings (in other words, an existing program will not be able to handle the encoded strings unless it is explicitly modified to do so), and it takes up twice as much memory space as multibyte encodings when encoding a purely ASCII string. Multibyte encodings use a variable number of bytes (either one or two) to represent characters. ASCII characters are also represented by a single byte with its high bit not set, and non-ASCII characters are represented by one or two bytes, the first of which always has its high bit set. (The second byte, when it exists, may or may not have its high bit set.) There is no single multibyte encoding. Instead, there is generally one encoding per non-ASCII character set. Such an encoding is capable of representing (besides ASCII characters, of course) only characters from one (or possibly two) particular character sets. Multibyte encoding makes processing of strings very difficult. For example, given a pointer to the beginning of a character within a string, finding the pointer to the beginning of the previous character may require backing up all the way to the beginning of the string, and then moving forward. Also, an operation such as separating out the components of a path by searching for backslashes will fail if it's implemented in the simplest (but not multibyte-aware) fashion, because it may find what appears to be a backslash, but which is actually the second byte of a two-byte character. Also, the limited number of character sets that any particular multibyte encoding can represent means that loss of data is likely if a string is converted from the XEmacs internal format into a multibyte format. For these reasons, the C code in XEmacs should never do any sort of work with multibyte encoded strings (or with strings in any external encoding for that matter). 
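To make the backslash-scanning pitfall described above concrete, here is
a hypothetical illustration.  Both functions are invented for this
example (@code{CharNextA()}, however, is a real Win32 routine):

@example
#include <windows.h>

/* WRONG for multibyte (DBCS) strings: the trail byte of a two-byte
   character can happen to equal '\\', so this can split a character
   in half. */
static char *
last_backslash_naive (char *path)
@{
  char *p, *last = NULL;
  for (p = path; *p; p++)
    if (*p == '\\')
      last = p;
  return last;
@}

/* Multibyte-aware version: CharNextA() steps over whole (possibly
   two-byte) characters, so a lead/trail byte pair is never split. */
static char *
last_backslash_mb (char *path)
@{
  char *p, *last = NULL;
  for (p = path; *p; p = CharNextA (p))
    if (*p == '\\')
      last = p;
  return last;
@}
@end example

In XEmacs itself even the multibyte-aware version would be wrong style,
of course -- as explained next, such work should be done on strings in
the internal encoding.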
Strings should always be maintained in the internal encoding, which is predictable, and converted to an external encoding only at the point where the string moves from the XEmacs C code and enters a system library function. Similarly, when a string is returned from a system library function, it should be immediately converted into the internal coding before any operations are done on it. Unicode, unlike multibyte encodings, is a fixed-width encoding where every character is represented using 16 bits. It is also capable of encoding all the characters from all the character sets in common use in the world. The predictability and completeness of the Unicode encoding makes it a very good encoding for strings that may contain characters from many character sets mixed up with each other. At the same time, of course, it is incompatible with routines that expect ASCII characters and also incompatible with general string manipulation routines, which will encounter a great number of what would appear to be embedded nulls in the string. It also takes twice as much room to encode strings containing primarily ASCII characters. This is why XEmacs does not use Unicode or similar encoding internally for buffers. The Win32 API cleverly deals with the issue of 8 bit vs. 16 bit characters by declaring a type called @code{@dfn{TCHAR}} which specifies a generic character, either 8 bits or 16 bits. Generally @code{TCHAR} is defined to be the same as the simple C type @code{char}, unless the preprocessor constant @code{UNICODE} is defined, in which case @code{TCHAR} is defined to be @code{WCHAR}, which is a 16 bit type. Nearly all functions in the Win32 API that take strings are defined to take strings that are actually arrays of @code{TCHAR}s. There is a type @code{LPTSTR} which is defined to be a string of @code{TCHAR}s and another type @code{LPCTSTR} which is a const string of @code{TCHAR}s. The theory is that any program that uses @code{TCHAR}s exclusively to represent characters and does not make assumptions about the size of a @code{TCHAR} or the way that the characters are encoded should work transparently regardless of whether the @code{UNICODE} preprocessor constant is defined, which is to say, regardless of whether 8 bit multibyte or 16 bit Unicode characters are being used. The way that this is actually implemented is that every Win32 API function that takes a string as an argument actually maps to one of two functions which are suffixed with an @code{A} (which stands for ANSI, and means multibyte strings) or @code{W} (which stands for wide, and means Unicode strings). The mapping is, of course, controlled by the same @code{UNICODE} preprocessor constant. Generally all structures containing strings in them actually map to one of two different kinds of structures, with either an @code{A} or a @code{W} suffix after the structure name. Unfortunately, not all of the implementations of the Win32 API implement all of the functionality described above. In particular, Windows 95 does not implement very much Unicode functionality. It does implement functions to convert multibyte-encoded strings to and from Unicode strings, and provides Unicode versions of certain low-level functions like @code{ExtTextOut()}. In fact, all of the rest of the Unicode versions of API functions are just stubs that return an error. 
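The pattern can be sketched as follows.  This is a deliberately
simplified, self-contained illustration -- it is @emph{not} the literal
contents of @file{<windows.h>}, whose declarations are more elaborate:

@example
/* Simplified sketch of the Win32 generic-text pattern. */
#ifdef UNICODE
typedef unsigned short TCHAR;     /* 16-bit Unicode character */
#define SetWindowText SetWindowTextW
#define TEXT(s) L##s              /* TEXT ("foo") ==> L"foo" */
#else
typedef char TCHAR;               /* 8-bit (multibyte) character */
#define SetWindowText SetWindowTextA
#define TEXT(s) s                 /* TEXT ("foo") ==> "foo" */
#endif

typedef TCHAR *LPTSTR;
typedef const TCHAR *LPCTSTR;

/* Code written against the generic names compiles either way:

     SetWindowText (hwnd, TEXT ("XEmacs"));

   expands to SetWindowTextA (...) with a narrow string, or to
   SetWindowTextW (...) with a wide string. */
@end example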
Conversely, all versions of Windows NT completely implement all the Unicode functionality, but some versions (especially versions before Windows NT 4.0) don't implement much of the multibyte functionality. For this reason, as well as for general code cleanliness, XEmacs needs to be written in such a way that it works with or without the @code{UNICODE} preprocessor constant being defined. Getting XEmacs to run when all strings are Unicode primarily involves removing any assumptions made about the size of characters. Remember what I said earlier about how the point of conversion between internally and externally encoded strings should occur at the point of entry or exit into or out of a library function. With this in mind, an externally encoded string in XEmacs can be treated simply as an arbitrary sequence of bytes of some length which has no particular relationship to the length of the string in the internal encoding. #### The rest of this is @strong{out-of-date} and needs to be written to reference the actual coding systems or aliases that we currently use. [[ To facilitate this, the enum @code{external_data_format}, which is declared in @file{lisp.h}, is expanded to contain three new formats, which are @code{FORMAT_LOCALE}, @code{FORMAT_UNICODE} and @code{FORMAT_TSTR}. @code{FORMAT_LOCALE} always causes encoding into a multibyte string consistent with the encoding of the current locale. The functions to handle locales are different under Unix and Windows and locales are a process property under Unix and a thread property under Windows, but the concepts are basically the same. @code{FORMAT_UNICODE} of course causes encoding into Unicode and @code{FORMAT_TSTR} logically maps to either @code{FORMAT_LOCALE} or @code{FORMAT_UNICODE} depending on the @code{UNICODE} preprocessor constant. Under Unix the behavior of @code{FORMAT_TSTR} is undefined and this particular format should not be used. Under Windows however @code{FORMAT_TSTR} should be used for pretty much all of the Win32 API calls. The other two formats should only be used in particular APIs that specifically call for a multibyte or Unicode encoded string regardless of the @code{UNICODE} preprocessor constant. String constants that are to be passed directly to Win32 API functions, such as the names of window classes, need to be bracketed in their definition with a call to the macro @code{TEXT}. This awfully named macro, which comes out of the Win32 API, appropriately makes a string of either regular or wide chars, which is to say this string may be prepended with an @code{L} (causing it to be a wide string) depending on the @code{UNICODE} preprocessor constant. By the way, if you're wondering what happened to @code{FORMAT_OS}, I think that this format should go away entirely because it is too vague and should be replaced by more specific formats as they are defined. ]] Use Qnative for Unix conversion, Qmswindows_tstr for Windows ... String constants that are to be passed directly to Win32 API functions, such as the names of window classes, need to be bracketed in their definition with a call to the macro XETEXT. This appropriately makes a string of either regular or wide chars, which is to say this string may be prepended with an L (causing it to be a wide string) depending on XEUNICODE_P. 
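For illustration, a hypothetical analogue of @code{XETEXT} might look
like the following.  This is @emph{not} the actual definition from the
XEmacs sources -- it only shows the idea that both the narrow and the
wide form of the literal are compiled in, with @code{XEUNICODE_P}
selecting between them at run time:

@example
/* Hypothetical sketch only; the real XETEXT differs in detail. */
#define MY_XETEXT(s) (XEUNICODE_P                 \
                      ? (const void *) (L##s)     \
                      : (const void *) (s))
@end example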
@node Modules for Interfacing with MS Windows, , Windows I18N Introduction, Interface to MS Windows @section Modules for Interfacing with MS Windows @cindex modules for interfacing with MS Windows @cindex interfacing with MS Windows, modules for @cindex MS Windows, modules for interfacing with @cindex Windows, modules for interfacing with There are two different general Windows-related include files in src. Uses are approximately: @table @file @item syswindows.h Wrapper around @file{<windows.h>}, including missing defines as necessary. Includes stuff needed on both Cygwin and native Windows, regardless of window system chosen. Includes definitions needed for Unicode conversion/encapsulation, and other Mule-related stuff, plus various other prototypes and Windows-specific, but not GUI-specific, stuff. @item console-msw.h Used on both Cygwin and native Windows, but only when native window system (as opposed to X) chosen. Includes @file{syswindows.h}. @end table Summary of files: @table @file @item console-msw.h include file for native windowing (otherwise, @file{console-x.h}, etc.) @item console-msw.c, frame-msw.c, etc. native windowing, as above @item process-nt.c subprocess support for native OS (otherwise, @file{process-unix.c}) @item nt.c support routines used under native OS @item win32.c support routines used under both OS environments @item syswindows.h support header for both environments @item nt/xemacs.mak Makefile for VC++ (otherwise, @file{src/Makefile.in.in}) @item s/windowsnt.h s header for basic native-OS defines, VC++ compiler @item s/mingw32.h s header for basic native-OS defines, GCC/MinGW compiler @item s/cygwin.h s header for basic Cygwin defines @item s/win32-native.h s header for basic native-OS defines, all compilers @item s/win32-common.h s header for defines for both OS environments @item intl-win32.c internationalization functions for both OS environments @item intl-encap-win32.c Unicode encapsulation functions for both OS environments @item intl-auto-encap-win32.c Auto-generated Unicode encapsulation functions @item intl-auto-encap-win32.h Auto-generated Unicode encapsulation headers @end table @node Interface to the X Window System, Dumping, Interface to MS Windows, Top @chapter Interface to the X Window System @cindex X Window System, interface to the Mostly undocumented. @menu * Lucid Widget Library:: An interface to various widget sets. * Modules for Interfacing with X Windows:: @end menu @node Lucid Widget Library, Modules for Interfacing with X Windows, Interface to the X Window System, Interface to the X Window System @section Lucid Widget Library @cindex Lucid Widget Library @cindex widget library, Lucid @cindex library, Lucid Widget Lwlib is extremely poorly documented and quite hairy. The author(s) blame that on X, Xt, and Motif, with some justice, but also sufficient hypocrisy to avoid drawing the obvious conclusion about their own work. The Lucid Widget Library is composed of two more or less independent pieces. The first, as the name suggests, is a set of widgets. These widgets are intended to resemble and improve on widgets provided in the Motif toolkit but not in the Athena widgets, including menubars and scrollbars. Recent additions by Andy Piper integrate some ``modern'' widgets by Edward Falk, including checkboxes, radio buttons, progress gauges, and index tab controls (aka notebooks). 
The second piece of the Lucid widget library is a generic interface to
several toolkits for X (including Xt, the Athena widget set, and Motif,
as well as the Lucid widgets themselves) so that core XEmacs code need
not know which widget set has been used to build the graphical user
interface.

@menu
* Generic Widget Interface::    The lwlib generic widget interface.
* Scrollbars::
* Menubars::
* Checkboxes and Radio Buttons::
* Progress Bars::
* Tab Controls::
@end menu

@node Generic Widget Interface, Scrollbars, Lucid Widget Library, Lucid Widget Library
@subsection Generic Widget Interface
@cindex widget interface, generic

In general in any toolkit a widget may be a composite object.  In Xt,
all widgets have an X window that they manage, but typically a complex
widget will have widget children, each of which manages a subwindow of
the parent widget's X window.  These children may themselves be
composite widgets.  Thus a widget is actually a tree or hierarchy of
widgets.

For each toolkit widget, lwlib maintains a tree of @code{widget_value}s
which mirrors the hierarchical state of Xt widgets (including Motif,
Athena, 3D Athena, and Falk's widget sets).  Each @code{widget_value}
has a @code{contents} member, which points to the head of a linked list
of its children.  The linked list of siblings is chained through the
@code{next} member of @code{widget_value}.

@example
           +-----------+
           | composite |
           +-----------+
                |
                | contents
                V
+-------+ next +-------+ next +-------+
| child |----->| child |----->| child |
+-------+      +-------+      +-------+
                   |
                   | contents
                   V
     +-------------+ next +-------------+
     | grand child |----->| grand child |
     +-------------+      +-------------+

The @code{widget_value} hierarchy of a composite widget with two
simple children and one composite child.
@end example

The @code{widget_instance} structure maintains the inverse view of the
tree.  As for the @code{widget_value}, siblings are chained through the
@code{next} member.  However, rather than naming children, the
@code{widget_instance} tree links to parents.

@example
           +-----------+
           | composite |
           +-----------+
                A
                | parent
                |
+-------+ next +-------+ next +-------+
| child |----->| child |----->| child |
+-------+      +-------+      +-------+
                   A
                   | parent
                   |
     +-------------+ next +-------------+
     | grand child |----->| grand child |
     +-------------+      +-------------+

The @code{widget_instance} view of the same hierarchy: a composite
widget with two simple children and one composite child.
@end example

This permits widgets derived from different toolkits to be updated and
manipulated generically by the lwlib library.  For instance,
@code{update_one_widget_instance} can cope with multiple types of widget
and multiple types of toolkit.  Each element in the widget hierarchy is
updated from its corresponding @code{widget_value} by walking the
@code{widget_value} tree.

This has desirable properties.  For example,
@code{lw_modify_all_widgets} is called from @file{glyphs-x.c} and
updates all the properties of a widget without having to know what the
widget is or what toolkit it is from.  Unfortunately this also has its
hairy properties; the lwlib code is quite complex.  And of course lwlib
has to know at some level what the widget is and how to set its
properties.

The @code{widget_instance} structure also contains a pointer to the root
of its tree.
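To make the linkage concrete, here is an abbreviated sketch of the two
members discussed above, plus an invented helper showing the kind of
depth-first walk lwlib performs.  (The real @code{widget_value} in
@file{lwlib.h} has many more members; @code{walk_widget_values()} is
purely illustrative.)

@example
typedef struct _widget_value
@{
  char *name;                       /* name of this widget */
  struct _widget_value *contents;   /* head of the list of children */
  struct _widget_value *next;       /* next sibling */
  /* ... many other members ... */
@} widget_value;

/* Illustrative only: visit every widget_value in a tree,
   depth-first, applying FN to each. */
static void
walk_widget_values (widget_value *wv, void (*fn) (widget_value *))
@{
  for (; wv; wv = wv->next)
    @{
      fn (wv);
      walk_widget_values (wv->contents, fn);
    @}
@}
@end example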
Widget instances are further confi

@node Scrollbars, Menubars, Generic Widget Interface, Lucid Widget Library
@subsection Scrollbars
@cindex scrollbars

@node Menubars, Checkboxes and Radio Buttons, Scrollbars, Lucid Widget Library
@subsection Menubars
@cindex menubars

@node Checkboxes and Radio Buttons, Progress Bars, Menubars, Lucid Widget Library
@subsection Checkboxes and Radio Buttons
@cindex checkboxes and radio buttons
@cindex radio buttons, checkboxes and
@cindex buttons, checkboxes and radio

@node Progress Bars, Tab Controls, Checkboxes and Radio Buttons, Lucid Widget Library
@subsection Progress Bars
@cindex progress bars
@cindex bars, progress

@node Tab Controls, , Progress Bars, Lucid Widget Library
@subsection Tab Controls
@cindex tab controls

@node Modules for Interfacing with X Windows, , Lucid Widget Library, Interface to the X Window System
@section Modules for Interfacing with X Windows
@cindex modules for interfacing with X Windows
@cindex interfacing with X Windows, modules for
@cindex X Windows, modules for interfacing with

@example
Emacs.ad.h
@end example

A file generated from @file{Emacs.ad}, which contains XEmacs-supplied
fallback resources (so that XEmacs has pretty defaults).

@example
EmacsFrame.c
EmacsFrame.h
EmacsFrameP.h
@end example

These modules implement an Xt widget class that encapsulates a frame.
This is for ease in integrating with Xt.  The EmacsFrame widget covers
the entire X window except for the menubar; the scrollbars are
positioned on top of the EmacsFrame widget.

@strong{Warning:} Abandon hope, all ye who enter here.  This code took
an ungodly amount of time to get right, and is likely to fall apart
mercilessly at the slightest change.  Such is life under Xt.

@example
EmacsManager.c
EmacsManager.h
EmacsManagerP.h
@end example

These modules implement a simple Xt manager (i.e. composite) widget
class that simply lets its children set whatever geometry they want.
It's amazing that Xt doesn't provide this standardly, but on second
thought, it makes sense, considering how amazingly broken Xt is.

@example
EmacsShell-sub.c
EmacsShell.c
EmacsShell.h
EmacsShellP.h
@end example

These modules implement two Xt widget classes that are subclasses of
the TopLevelShell and TransientShell classes.  This is necessary to
deal with more brokenness that Xt has sadistically thrust onto the
backs of developers.

@example
xgccache.c
xgccache.h
@end example

These modules provide functions for maintenance and caching of GC's
(graphics contexts) under the X Window System.  This code is junky and
needs to be rewritten.

@example
select-msw.c
select-x.c
select.c
select.h
@end example

@cindex selections
These modules provide an interface to the X Window System's concept of
@dfn{selections}, the standard way for X applications to communicate
with each other.

@example
xintrinsic.h
xintrinsicp.h
xmmanagerp.h
xmprimitivep.h
@end example

These header files are similar in spirit to the @file{sys*.h} files and
buffer against different implementations of Xt and Motif.

@itemize @bullet
@item
@file{xintrinsic.h} should be included in place of @file{<Intrinsic.h>}.
@item
@file{xintrinsicp.h} should be included in place of
@file{<IntrinsicP.h>}.
@item
@file{xmmanagerp.h} should be included in place of
@file{<XmManagerP.h>}.
@item
@file{xmprimitivep.h} should be included in place of
@file{<XmPrimitiveP.h>}.
@end itemize

@example
xmu.c
xmu.h
@end example

These files provide an emulation of the Xmu library for those systems
(e.g. HPUX) that don't provide it as a standard part of X.
@example
ExternalClient-Xlib.c
ExternalClient.c
ExternalClient.h
ExternalClientP.h
ExternalShell.c
ExternalShell.h
ExternalShellP.h
extw-Xlib.c
extw-Xlib.h
extw-Xt.c
extw-Xt.h
@end example

@cindex external widget
These files provide the @dfn{external widget} interface, which allows an
XEmacs frame to appear as a widget in another application.  To do this,
you have to configure with @samp{--external-widget}.

@file{ExternalShell*} provides the server (XEmacs) side of the
connection.

@file{ExternalClient*} provides the client (other application) side of
the connection.  These files are not compiled into XEmacs but are
compiled into libraries that are then linked into your application.

@file{extw-*} is common code that is used for both the client and
server.

Don't touch this code; something is liable to break if you do.

@node Dumping, Future Work, Interface to the X Window System, Top
@chapter Dumping
@cindex dumping

@menu
* Dumping Justification::
* Overview::
* Data descriptions::
* Dumping phase::
* Reloading phase::
* Remaining issues::
@end menu

@node Dumping Justification, Overview, Dumping, Dumping
@section Dumping Justification
@cindex dumping, justification

The C code of XEmacs is just a Lisp engine with a lot of built-in
primitives useful for writing an editor.  The editor itself is written
mostly in Lisp, and represents around 100K lines of code.  Loading and
executing the initialization of all this code takes a bit of time (five
to ten times the usual startup time of current xemacs) and requires
having all the lisp source files around.  Having to reload them each
time the editor is started would not be acceptable.

The traditional solution to this problem is called dumping: the build
process first creates the lisp engine under the name @file{temacs},
then runs it until it has finished loading and initializing all the
lisp code, and eventually creates a new executable called @file{xemacs}
including both the object code in @file{temacs} and all the contents of
the memory after the initialization.

This solution, while working, has a huge problem: the creation of the
new executable from the actual contents of memory is an extremely
system-specific process, quite error-prone, and one which interferes
with a lot of system libraries (like malloc).  It is even getting worse
nowadays with libraries that use constructors, automatically called
when the program is started (even before @code{main()}), which tend to
crash when they are called multiple times, once before dumping and once
after (IRIX 6.x @file{libz.so} pulls in some C++ image libraries thru
dependencies which have this problem).  Writing the dumper is also one
of the most difficult parts of porting XEmacs to a new operating
system.  Basically, `dumping' is an operation that is just not
officially supported on many operating systems.

The aim of the portable dumper is to solve the same problem as the
system-specific dumper, that is to be able to reload quickly, using
only a small number of files, the fully initialized lisp part of the
editor, without any system-specific hacks.

@node Overview, Data descriptions, Dumping Justification, Dumping
@section Overview
@cindex dumping overview

The portable dumping system has to:

@enumerate
@item
At dump time, write all initialized, non-quickly-rebuildable data to a
file [Note: currently named @file{xemacs.dmp}, but the name will
change], along with all information needed for the reloading.
@item
When starting xemacs, reload the dump file, relocate it to its new
starting address if needed, and reinitialize all pointers to this data.
Also, rebuild all the quickly rebuildable data.
@end enumerate

Note: As of 21.5.18, the dump file has been moved inside of the
executable, although there are still problems with this on some
systems.

@node Data descriptions, Dumping phase, Overview, Dumping
@section Data descriptions
@cindex dumping data descriptions

The more complex task of the dumper is to be able to write memory
blocks on the heap (lisp objects, i.e. lrecords, and C-allocated
memory, such as structs and arrays) to disk and reload them at a
different address, updating all the pointers they include in the
process.  This is done by using external data descriptions that give
information about the layout of the blocks in memory.

The specification of these descriptions is in @file{lrecord.h}.  A
description of an lrecord is an array of @code{struct
memory_description}.  Each of these structs includes a type, an offset
in the block and some optional parameters depending on the type.  For
instance, here is the string description:

@example
static const struct memory_description string_description[] = @{
  @{ XD_BYTECOUNT,       offsetof (Lisp_String, size) @},
  @{ XD_OPAQUE_DATA_PTR, offsetof (Lisp_String, data), XD_INDIRECT(0, 1) @},
  @{ XD_LISP_OBJECT,     offsetof (Lisp_String, plist) @},
  @{ XD_END @}
@};
@end example

The first line indicates a member of type @code{Bytecount}, which is
used by the next, indirect directive.  The second means ``there is a
pointer to some opaque data in the field @code{data}''.  The length of
said data is given by the expression @code{XD_INDIRECT(0, 1)}, which
means ``the value in the 0th line of the description (welcome to C)
plus one''.  The third line means ``there is a Lisp_Object member
@code{plist} in the Lisp_String structure''.  @code{XD_END} then ends
the description.

This gives us all the information we need to move around what is
pointed to by a memory block (C or lrecord) and, by transitivity,
everything that it points to.  The only missing information for dumping
is the size of the block.  For lrecords, this is part of the
lrecord_implementation, so we don't need to duplicate it.  For C blocks
we use a @code{struct sized_memory_description}, which includes a size
field and a pointer to an associated array of
@code{memory_description}.

@node Dumping phase, Reloading phase, Data descriptions, Dumping
@section Dumping phase
@cindex dumping phase

Dumping is done by calling the function @code{pdump()} (in
@file{dumper.c}) which is invoked from @code{Fdump_emacs} (in
@file{emacs.c}).  This function performs a number of tasks.

@menu
* Object inventory::
* Address allocation::
* The header::
* Data dumping::
* Pointers dumping::
@end menu

@node Object inventory, Address allocation, Dumping phase, Dumping phase
@subsection Object inventory
@cindex dumping object inventory
@cindex memory blocks

The first task is to build the list of the objects to dump.  This
includes:

@itemize @bullet
@item
lisp objects
@item
other memory blocks (C structures, arrays, etc.)
@end itemize

We end up with one @code{pdump_block_list_elt} per object group (arrays
of C structs are kept together) which includes a pointer to the first
object of the group, the per-object size and the count of objects in
the group, along with some other information which is initialized
later.
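Roughly sketched (hedged: the real definitions live in @file{dumper.c}
and differ in detail), the bookkeeping just described -- and the
@code{pdump_block_list} structures mentioned in the next paragraph --
looks something like this:

@example
typedef struct pdump_block_list_elt
@{
  struct pdump_block_list_elt *next;  /* next group in the same list */
  const void *obj;                    /* first object of the group */
  Bytecount size;                     /* per-object size */
  int count;                          /* number of objects in the group */
  EMACS_INT save_offset;              /* offset in the dump file;
                                         initialized later */
@} pdump_block_list_elt;

typedef struct
@{
  pdump_block_list_elt *first;        /* head of the chain of groups */
  int align;                          /* alignment requirement */
  int count;                          /* number of groups in the list */
@} pdump_block_list;
@end example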
These entries are linked together in @code{pdump_block_list} structures
and can be enumerated thru one of:

@enumerate
@item
the @code{pdump_object_table}, an array of @code{pdump_block_list}, one
per lrecord type, indexed by type number.
@item
the @code{pdump_opaque_data_list}, used for the opaque data which does
not include pointers, and hence does not need descriptions.
@item
the @code{pdump_desc_table}, which is a vector of
@code{memory_description}/@code{pdump_block_list} pairs, used for
non-opaque C memory blocks.
@end enumerate

This uses a marking strategy similar to that of the garbage collector.
There are some differences, though:

@enumerate
@item
We do not use the mark bit (which does not exist for generic memory
blocks anyway); we use a big hash table instead.
@item
We do not use the mark function of lrecords but instead rely on the
external descriptions.  This happens essentially because we need to
follow pointers to generic memory blocks and opaque data in addition to
Lisp_Object members.
@end enumerate

This is done by @code{pdump_register_object()}, which handles
Lisp_Object variables, and @code{pdump_register_block()}, which handles
generic memory blocks (C structures, arrays, etc.); both delegate the
description management to @code{pdump_register_sub()}.

The hash table doubles as a map from object to
@code{pdump_block_list_elt} (i.e. it allows us to look up the
@code{pdump_block_list_elt} that points to a given object).  Entries
are added with @code{pdump_add_block()} and looked up with
@code{pdump_get_block()}.  There is no need for entry removal.  The
hash value is computed quite simply from the object pointer by
@code{pdump_make_hash()}.

The roots for the marking are:

@enumerate
@item
the @code{staticpro}'ed variables (there is a special
@code{staticpro_nodump()} call for protected variables we do not want
to dump).

@item
the Lisp_Object variables registered via
@code{dump_add_root_lisp_object} (@code{staticpro()} is equivalent to
@code{staticpro_nodump()} + @code{dump_add_root_lisp_object()}).

@item
the data-segment memory blocks registered via
@code{dump_add_root_block} (for blocks with relocatable pointers), or
@code{dump_add_opaque} (for ``opaque'' blocks with no relocatable
pointers; this is just a shortcut for calling
@code{dump_add_root_block} with a NULL description).

@item
the pointer variables registered via @code{dump_add_root_block_ptr},
each of which points to a block of heap memory (generally a C structure
or array).  Note that @code{dump_add_root_block_ptr} is not technically
necessary, as a pointer variable can be seen as a special case of a
data-segment memory block and registered using
@code{dump_add_root_block}.  Doing it this way, however, would require
declaring another level of static structures.  Since pointer variables
are quite common, @code{dump_add_root_block_ptr} is provided for
convenience.  Note also that internally we have to treat it separately
from @code{dump_add_root_block} rather than writing the former as a
call to the latter, since we don't have support for creating and using
memory descriptions on the fly -- they must all be statically declared
in the data-segment.
@end enumerate

This does not include the GCPRO'ed variables, the specbinds, the
catchtags, the backlist, the redisplay or the profiling info, since we
do not want to rebuild the actual chain of lisp calls which led up to
the dump-emacs call, only the global variables.

Weak lists and weak hash tables are dumped as if they were their
non-weak equivalent (without changing their type, of course).
This has not yet been a problem.

@node Address allocation, The header, Object inventory, Dumping phase
@subsection Address allocation
@cindex dumping address allocation

The next step is to allocate the offsets of each of the objects in the
final dump file.  This is done by @code{pdump_allocate_offset()} which
is called indirectly by @code{pdump_scan_by_alignment()}.

The strategy to deal with alignment problems uses these facts:

@enumerate
@item
real-world alignment requirements are powers of two.
@item
the C compiler is required to adjust the size of a struct so that you
can have an array of them next to each other.  This means you can get
an upper bound on the alignment requirement of a given structure by
looking at the largest power of two of which its size is a multiple.
@item
the non-variant part of variable-size lrecords has an alignment
requirement of 4.
@end enumerate

Hence, for each lrecord type, C struct type or opaque data block, the
alignment requirement is computed as a power of two, with a minimum of
2^2 for lrecords.  @code{pdump_scan_by_alignment()} then scans all the
@code{pdump_block_list_elt}'s, the ones with the highest requirements
first.  This ensures the best packing.

The maximum alignment requirement we take into account is 2^8.
@code{pdump_allocate_offset()} only has to do a linear allocation,
starting at offset 256 (this leaves room for the header and keeps the
alignments happy).

@node The header, Data dumping, Address allocation, Dumping phase
@subsection The header
@cindex dumping, the header

The next step creates the file and writes a header with a signature and
some random information in it.  The @code{reloc_address} field, which
indicates at which address the file should be loaded if we want to
avoid post-reload relocation, is set to 0.  It then seeks to offset 256
(base offset for the objects).

@node Data dumping, Pointers dumping, The header, Dumping phase
@subsection Data dumping
@cindex data dumping
@cindex dumping, data

The data is dumped, in the same order as the addresses were allocated,
by @code{pdump_dump_data()}, called from
@code{pdump_scan_by_alignment()}.  This function copies the data to a
temporary buffer, relocates all pointers in the object to the addresses
allocated in step Address Allocation, and writes it to the file.  Using
the same order means that, if we are careful with lrecords whose size
is not a multiple of 4, we are ensured that the object is always
written at the offset in the file allocated in step Address Allocation.

@node Pointers dumping, , Data dumping, Dumping phase
@subsection Pointers dumping
@cindex pointers dumping
@cindex dumping, pointers

A bunch of tables needed to properly reassign the global pointers are
then written.  They are:

@enumerate
@item
the pdump_root_block_ptrs dynarr
@item
the pdump_opaques dynarr
@item
a vector of all the offsets to the objects in the file that include a
description (for faster relocation at reload time)
@item
the pdump_root_objects and pdump_weak_object_chains dynarrs.
@end enumerate

For each of the dynarrs we write both the pointer to the variables and
the relocated offset of the object they point to.  Since these
variables are global, the pointers are still valid when restarting the
program and are used to regenerate the global pointers.

The @code{pdump_weak_object_chains} dynarr is a special case.  The
variables it points to are the heads of weak linked lists of lisp
objects of the same type.
Not all objects of this list are dumped, so the relocated pointer we
associate with them points to the first dumped object of the list, or
@code{Qnil} if none is available.  This is also the reason why they are
not used as roots for the purpose of object enumeration.

Some very important information, like the @code{staticpros} and
@code{lrecord_implementations_table}, is handled indirectly using
@code{dump_add_opaque} or @code{dump_add_root_block_ptr}.

This is the end of the dumping part.

@node Reloading phase, Remaining issues, Dumping phase, Dumping
@section Reloading phase
@cindex reloading phase
@cindex dumping, reloading phase

@subsection File loading
@cindex dumping, file loading

The file is mmap'ed in memory (which ensures a PAGESIZE alignment, at
least 4096), or if mmap is unavailable or fails, a 256-byte-aligned
malloc is done and the file is loaded.

Some variables are reinitialized from the values found in the header.

The difference between the actual loading address and the
reloc_address is computed and will be used for all the relocations.

@subsection Putting back the pdump_opaques
@cindex dumping, putting back the pdump_opaques

The memory contents are restored in the obvious and trivial way.

@subsection Putting back the pdump_root_block_ptrs
@cindex dumping, putting back the pdump_root_block_ptrs

The variables pointed to by pdump_root_block_ptrs in the dump phase are
reset to the right relocated object addresses.

@subsection Object relocation
@cindex dumping, object relocation

All the objects are relocated using their description and their offset
by @code{pdump_reloc_one}.  This step is unnecessary if the
reloc_address is equal to the file loading address.

@subsection Putting back the pdump_root_objects and pdump_weak_object_chains
@cindex dumping, putting back the pdump_root_objects and pdump_weak_object_chains

Same as Putting back the pdump_root_block_ptrs.

@subsection Reorganize the hash tables
@cindex dumping, reorganize the hash tables

Since some of the hash values in the lisp hash tables are
address-dependent, their layout is now wrong.  So we go through each of
them and have them re-sorted by calling
@code{pdump_reorganize_hash_table}.

@node Remaining issues, , Reloading phase, Dumping
@section Remaining issues
@cindex dumping, remaining issues

The build process will have to start a post-dump xemacs, ask it the
loading address (which will, hopefully, always be the same between
different xemacs invocations) [[unfortunately, not true on Linux with
the ExecShield feature]] and relocate the file to the new address.
This way the object relocation phase will not have to be done, which
means no writes in the objects and that, because of the use of mmap,
the dumped data will be shared among all the xemacs processes running
on the computer.

Some executable signature will be necessary to ensure that a given dump
file is really associated with a given executable, or random crashes
will occur.  Maybe a random number set at compile or configure time
thru a define.  This will also allow for having differently-compiled
xemacsen on the same system (mule and no-mule come to mind).

The DOC file contents should probably end up in the dump file.
@node Future Work, Future Work Discussion, Dumping, Top
@chapter Future Work
@cindex future work

@menu
* Future Work -- General Suggestions::
* Future Work -- Elisp Compatibility Package::
* Future Work -- Drag-n-Drop::
* Future Work -- Standard Interface for Enabling Extensions::
* Future Work -- Better Initialization File Scheme::
* Future Work -- Keyword Parameters::
* Future Work -- Property Interface Changes::
* Future Work -- Toolbars::
* Future Work -- Menu API Changes::
* Future Work -- Removal of Misc-User Event Type::
* Future Work -- Mouse Pointer::
* Future Work -- Extents::
* Future Work -- Version Number and Development Tree Organization::
* Future Work -- Improvements to the @code{xemacs.org} Website::
* Future Work -- Keybindings::
* Future Work -- Byte Code Snippets::
* Future Work -- Lisp Stream API::
* Future Work -- Multiple Values::
* Future Work -- Macros::
* Future Work -- Specifiers::
* Future Work -- Display Tables::
* Future Work -- Making Elisp Function Calls Faster::
* Future Work -- Lisp Engine Replacement::
@end menu

@node Future Work -- General Suggestions, Future Work -- Elisp Compatibility Package, Future Work, Future Work
@section Future Work -- General Suggestions
@cindex future work, general suggestions
@cindex general suggestions, future work

@subheading Jamie Zawinski's XEmacs Wishlist

This document is based on Jamie Zawinski's
@uref{http://www.jwz.org/doc/xemacs-wishlist.html,xemacs wishlist}.
Throughout this page, ``I'' refers to Jamie.  The list has been
substantially reformatted and edited to fit the needs of this site.  If
you have any soul at all, you'll go check out the original.  OK?  You
should also check out some other
@uref{http://www.xemacs.org/Releases/Public-21.2/execution.html#wishlists,wishlists}.

@subsubheading About the List

I've ranked these (roughly) from easiest to hardest; though of all of
them, I think the debugger improvements would be the most useful.  I
think the combination of emacs+gdb is the best Unix development
environment currently available, but it's still lamentably primitive
and extremely frustrating (much like Unix itself), especially if you
know what kinds of features more modern integrated debuggers have.

@subsubheading XEmacs Wishlist

@table @strong
@item Improve the keyboard macro system.

Keyboard macros are one of the most useful concepts that emacs has to
offer, but there's room for improvement.

@table @strong
@item Make it possible to embed one macro inside of another.

Often, I'll define a keyboard macro, and then realize that I've left
something out, or that there's more that I need to do; for example, I
may define a macro that does something to the current line, and then
realize that I want to apply it to a lot of lines.  So, I'd like this
to work:

@example
@kbd{C-x (          }  ; start macro #1
@kbd{...            }  ; (do stuff)
@kbd{C-x )          }  ; done with macro #1
@kbd{...            }  ; (do stuff)
@kbd{C-x (          }  ; start macro #2
@kbd{C-x e          }  ; execute macro #1 (splice it into macro #2)
@kbd{C-s foo        }  ; move forward to the next spot
@kbd{C-x )          }  ; done with macro #2
@kbd{C-u 1000 C-x e }  ; apply the new macro
@end example

That is, simply, one should be able to wrap new text around an existing
macro.  I can't tell you how many times I've defined a complex macro
but left out the ``@kbd{C-n C-a}'' at the end...  Yes, you can
accomplish this with @kbd{M-x name-last-kbd-macro}, but that's a pain.
And it's also more permanent than I'd often like.

@item Make it possible to correct errors when defining a macro.
Right now, the act of defining a macro stops if you get an error while defining it, and all of the characters you've already typed into the macro are gone. It needn't be that way. I think that, when that first error occurs, the user should be given the option of taking the last command off of the macro and trying again. The macro-reader knows where the bounds of multi-character command sequences are, and it could even keep track of the corresponding undo records; rubbing out the previous entry on the macro could also undo any changes that command had made. (This should also work if the macro spans multiple buffers, and should restore window configurations as well.) You'd want multi-level undo for this as well, so maybe the way to go would be to add some new key sequence which was used only as the back-up-inside-a-keyboard-macro-definition command. I'm not totally sure that this would end up being very usable; maybe it would be too hard to deal with. Which brings us to: @item Make it possible to edit a keyboard macro after it has been defined. I only just discovered @code{edit-kbd-macro} (@kbd{C-x C-k}). It is very, very cool. The trick it does of showing the command which will be executed is somewhat error-prone, as it can only look up things in the current map or the global map; if the macro changed buffers, it wouldn't be displaying the right commands. (One of the things I often use macros for is operating on many files at once, by bringing up a dired buffer of those files, editing them, and then moving on to the next.) However, if the act of recording a macro also kept track of the actual commands that had gotten executed, it could make use of that info as well. Another way of editing a macro, other than as text in a buffer, would be to have a command which single-steps a macro: you would lean on the space bar to watch the macro execute one character (command?) at a time, and then when you reached the point you wanted to change, you could do some gesture to either: insert some keystrokes into the middle of the macro and then continue; or to replace the rest of the macro from here to the end; or something. Another similar hack might be to convert a macro to the equivalent lisp code, so that one could tweak it later in ways that would be too hard to do from the keyboard (wrapping parts of it in @code{while} loops or something.) (@kbd{M-x insert-kbd-macro} isn't really what I'm talking about here: I mean insert the list of commands, not the list of keystrokes.) @end table @item Save my wrists! In the spirit of the `@code{teach-extended-commands-p}' variable, it would be interesting if emacs would keep track of what are the commands I use most often, perhaps grouped by proximity or mode -- it would then be more obvious which commands were most likely candidates for placement on a toolbar, or popup menu, or just a more convenient key binding. Bonus points if it figures out that I type ``@kbd{bt\n}'' and ``@kbd{ret\ny\n}'' into my @samp{*gdb*} buffer about a hundred thousand times a day. @item XmCreateFileSelectionBox The thing that ``File/Open...'' pops up has excellent @emph{hack} value, but as a user interface, it's an abomination. Isn't it time someone added a real file selection dialog already? (For the Motifly-challenged, the Athena-based file selector that GhostView uses seems adequate.) @item Improve the toolbar system. It's great that XEmacs has a toolbar, but it's damn near impossible to customize it. @table @strong @item Make it easy to define new toolbar buttons. 
Currently, to define a toolbar button that has a text equivalent, one must edit a pixmap, and put the text there! That's prohibitive. One should be able to add some kind of generic toolbar button, with a plain icon or none at all, but which has a text label, without having to use a paint program. @item Make it easy to have customized, mode-local toolbars. In my @code{c-mode-hook}, for example, I can add a couple of new keybindings, and delete a few others, and to do that, I don't have to duplicate the entire definition of the @code{c-mode-map}. Making mode-local additions and subtractions to the toolbars should be as easy. @item Make it easy to have customized, mode-local popup menus. The same situation holds for the right-mouse-button popup menu; one should be able to add new commands to those menus without difficulty. One problem is that each mode which does have a popup menu implements it in a different way... @end table @item Make the External Widget work. About half of the work is done to make a replacement for the @code{XmText} widget which offloads editing responsibility to an external Emacs process. Someone should finish that. The benefit here would be that then, any Motif program could be linked such that all editing happened with a real Emacs behind it. (If you're Athena-minded, flavor with @code{Text} instead of @code{XmText} -- it's probably easy to make it work with both.) The part of this that is done already is the ability to run an Emacs screen on a Window object that has been created by another process (this is what the @file{ExternalClient.c} and @file{ExternalShell.c} stuff is.) What is left to be done is, adding the text-widget-editor aspects of this. First, the emacs screen being displayed on that window would have to be one without a modeline, and one which behaved sensibly in the context of ``I am a small multi-line text area embedded in a dialog box'' as opposed to ``I am a full-on text editor and lord of all that I survey.'' Second, the API that the (non-emacs-aware) user of the @code{XmText} widget expects would need to be implemented: give the caller the ability to pull the edited text string back out, and so on. The idea here being, hooking up emacs as the widget editor should be as transparent as possible. @item Bring the debugger interface into the eighties. Some of you may have seen my @file{gdb-highlight.el} package, that I posted to gnu.emacs.sources last month. I think it's really cool, but there should be a lot more work in that direction. For those of you who haven't seen it, what it does is watch text that gets inserted into the @samp{*gdb*} buffer and make very nearly everything be clickable and have a context-sensitive menu. Generally, the types that are noticed are: @itemize @item function names; @item variable and parameter names; @item structure slots; @item source file names; @item type names; @item breakpoint numbers; @item stack frame numbers. @end itemize Any time one of those objects is presented in the @samp{*gdb*} buffer, it is mousable. Clicking middle button on it takes some default action (edits the function, selects the stack frame, disables the breakpoint, ...) Clicking the right button pops up a menu of commands, including commands specific to the object under the mouse, and/or other objects on the same line. So that's all well and good, and I get far more joy out of what this code does for me than I expected, but there are still a bunch of limitations. The debugger interface needs to do much, much more. 
@table @strong
@item Make gdbsrc-mode not suck.

The idea behind @code{gdbsrc-mode} is on the side of the angels: one should be able to focus on the source code and not on the debugger buffer, absolutely. But the implementation is just awful.

First and foremost, it should not change ``modes'' (in the more general sense). Any commands that it defines should be on keys which are exclusively used for that purpose, not keys which are normally self-inserting. I can't be the only person who usually has occasion to actually @emph{edit} the sources which the debugger has chosen to display! Switching into and out of @code{gdbsrc-mode} is prohibitive. I want to be looking at my sources at all times, yet I don't want to have to give up my source-editing gestures.

I think the right way to accomplish this is to put the gdbsrc commands on the toolbar and on popup menus; or to let the user define their own keys (I could see devoting my @key{kp_enter} key to ``step'', or something common like that.)

Also, it's extremely frustrating that one can't turn off gdbsrc mode once it has been loaded, without exiting and restarting emacs; that alone means that I'd probably never take the time to learn how to use it, without first having taken the time to repair it...

@item Make it easier to access variable values.

I want to be able to double-click on a variable name to highlight it, and then drag it to the debugger window to have its value printed. I want gestures that let me write as well as read: for example, to store value A into slot B.

@item Make all breakpoints visible.

Any time there is a running gdb which has breakpoints, the buffers holding the lines on which those breakpoints are set should have icons in them. These icons should be context-sensitive: I should be able to pop up a menu to enable or disable them, to delete them, to change their commands or conditions.

I should also be able to @emph{move} them. It's annoying when you have a breakpoint with a complex condition or command on it, and then you realize that you really want it to be at a different location. I want to be able to drag-and-drop the icon to its new home.

@item Make a debugger status display window.

@itemize
@item
I want a window off to the side that shows persistent information -- it should have a pane which is a drag-editable, drag-reorderable representation of the elements on gdb's ``display'' list; they should be displayed here instead of being just dumped in with the rest of the output in the @samp{*gdb*} buffer.

@item
I want a pane that displays the current call-stack and nothing else. I want a pane that displays the arguments and locals of the currently-selected frame and nothing else. I want these both to update as I move around on the stack.

@item
Since the unfortunate reality is that excavating this information from gdb can be slow, it would be a good idea for these panes to have a toggle button on them which meant ``stop updating'', so that when I want to move fast, I can, but I can easily get the display back when I need it again.
@end itemize

The reason for all of this is that I spend entirely too much time scrolling around in the @samp{*gdb*} buffer; with gdb-highlight, I can just click on a line in the backtrace output to go to that frame, but I find that I spend a lot of time @emph{looking} for that backtrace: since it's mixed in with all the other random output, I waste time looking around for things (and usually just give up and type ``@kbd{bt}'' again, then thrash around as the buffer scrolls, and I try to find the lower frames that I'm interested in, as they have invariably scrolled off the window already...)

@item Save and restore breakpoints across emacs/debugger sessions.

This would be especially handy given that gdb leaks like a sieve, and with a big program, I only get a few dozen relink-and-rerun attempts before gdb has blown my swap space.

@item Keep breakpoints in sync with source lines.

When a program is recompiled and then reloaded into gdb, the breakpoints often end up in less-than-useful places. For example, when I edit text which occurs in a file anywhere before a breakpoint, emacs is aware that the line of the bp hasn't changed, but just that it is in a different place relative to the top of the file. Gdb doesn't know this, so your breakpoints end up getting set in the wrong places (usually the maximally inconvenient places, like @emph{after} a loop instead of @emph{inside} it). But emacs knows, so emacs should inform the debugger, and move the breakpoints back to the places they were intended to be.

@end table

(Possibly the OOBR stuff does some of this, but I can't tell, because I've never been able to get it to do anything but beep at me and mumble about environments. I find it pretty funny that the manual keeps explaining to me how intuitive it is, without actually giving me a clue how to launch it...)

@item Add better dialog box features.

It'd be nice to be able to create more complex dialog boxes from emacs-lisp: ones with checkboxes, radio button groups, text fields, and popup menus.

@item Add embeddable dialog boxes.

One of the things that the now-defunct Energize code (the C side of it, that is) could do was embed a dialog box between the toolbar and the main text area -- buffers could have control panels associated with them, that had all kinds of complex behavior.

@item Make the mark-stack be visible.

You know, I've encountered people who have been using emacs for years, and never use the mark stack for navigation. I can't live without it; ``@kbd{C-u C-SPC}'' is among my most common gestures.

@enumerate
@item
It would be a lot easier to realize what's going to happen if the marks on the mark stack were visible. They could be displayed as small ``caret'' glyphs, for example; something large enough to be visible, but not easily mistaken for a character or for the cursor.

@item
The marks and the selected region should be visible in the scrollbar as well -- I don't remember where I first saw this idea, but it's very cool: there's a second, less-strongly-rendered ``thumb'' in the scrollbar which indicates the position and size of the selection; and there are tiny tick-marks which indicate the positions of the saved points.

@item
Markers which are in registers (@code{point-to-register}, @kbd{C-x /}) should be displayed differently (more prominent.)

@item
It'd be cool if you could pick up markers and move them around, to adjust the points you'll be coming back to later.
@end enumerate

@item Write a new garbage collector.
The emacs GC is very primitive; it is also, fortunately, a rather well isolated module, and it would not be a very big task to swap it with a new one (once that new one was written, that is.) Someone should go bone up on modern GC techniques, and then just dive right in...

@item Add support for lexical scope to the emacs-lisp runtime.

Yadda yadda, this list goes to eleven.
@end table

@*
Subject: @strong{Re: XEmacs wishlist}
Date: Wed, 14 May 1997 16:18:23 -0700
From: Jamie Zawinski <jwz@@netscape.com>
Newsgroups: comp.emacs.xemacs, comp.emacs

Andreas Schwab wrote:

@quotation
@emph{Use `C-u C-x (': }

@emph{start-kbd-macro:@*Non-nil arg (prefix arg) means append to last macro defined; This begins by re-executing that macro as if you typed it again. }
@end quotation

Cool, I didn't know it did that... But it only lets you append. I often want to prepend, or embed the macro multiple times (motion 1, C-x e, motion 2, C-x e, motion 3.)

@subheading 21.2 Showstoppers

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

I. DISTRIBUTION ISSUES

A. Unified Source Tarball. Packages go under @file{root/lib/xemacs/xemacs-packages} and no one ever has to mess with @code{--package-path}, and the result can be moved from one directory to another pre- or post-install.

B. Unified Binary Tarballs with Packages. Same principles as above. If people complain, we can also provide split binary tarballs (architecture dependent and independent) and place these files in a subdirectory so as not to confuse the majority just looking for one tarball. Under Windows, we need to provide a WISE-style GUI setup program. It's already there but needs some work so you can select ``all'' packages easily (should be the default).

C. Parallel Root and Package Trees. If the user downloads the main source and the packages separately, he will naturally untar them into the same directory. This results in the parallel root and package structure. We should support this as a ``last resort,'' i.e., if we find no packages anywhere and are about to resign ourselves to not having packages, then look for a parallel package tree. The user who sets things up like this should be able to either run in place or @code{make install} and get a proper installed XEmacs. Never should the user have to touch @code{--package-path}.

II. WINDOWS PRINTING

Looks like the internals are done but not the GUI. This must be working in 21.2.

III. WINDOWS MULE

Basic support should be there. There's already a patch to get things started and I'll be doing more work to make this real.

IV. GUTTER ETC.

This stuff needs to be ``stable'' and generally free from bugs. Any APIs we create need to be well-reviewed or marked clearly as experimental.

V. PORTABLE DUMPER

Last bits need to be cleaned up. This should be made the ``default'' for a while to flush out problems. Under Microsoft Windows, Portable Dumper must be the default in 21.2 because of the problems with the existing dump process.

COMMENT: I'd like to feature freeze this pretty soon and create a 21.3 tree where all of my major overhauls of Mule-related stuff will go in. At or around the same time, we need to do the move-around in the repository (or create a new one) and ``upgrade'' to the latest CVS server.
@node Future Work -- Elisp Compatibility Package, Future Work -- Drag-n-Drop, Future Work -- General Suggestions, Future Work @section Future Work -- Elisp Compatibility Package @cindex future work, elisp compatibility package @cindex elisp compatibility package, future work Author: @uref{mailto:ben@@xemacs.org,Ben Wing} A while ago I created a package called Sysdep, which aimed to be a forward compatibility package for Elisp. The idea was that instead of having to write your package using the oldest version of Emacs that you wanted to support, you could use the newest XEmacs API, and then simply load the Sysdep package, which would automatically define the new API in terms of older APIs as necessary. The idea of this package was good, but its design wasn't perfect, and it wasn't widely adopted. I propose a new package called Compat that corrects the design flaws in Sysdep, and hopefully will be adopted by most of the major packages. In addition, this package will provide macros that can be used to bracket code as necessary to disable byte compiler warnings generated as a result of supporting the APIs of different versions of Emacs; or rather the Compat package strives to provide useful constructs to make doing this support easier, and these constructs have the side effect of not causing spurious byte compiler warnings. The idea here is that it should be possible to create well-written, clean, and understandable Elisp that supports both older and newer APIs, and has no byte compiler warnings. Currently many warnings are unavoidable, and as a result, they are simply ignored, which also causes a lot of legitimate warnings to be ignored. The approach taken by the Sysdep package to make sure that the newest API was always supported was fairly simple: when the Sysdep package was loaded, it checked for the existence of new API functions, and if they weren't defined, it defined them in terms of older API functions that were defined. This had the advantage that the checks for which API functions were defined were done only once at load time rather than each time the function was called. However, the fact that the new APIs were globally defined caused a lot of problems with unwanted interactions, both with other versions of the Sysdep package provided as part of other packages, and simply with compatibility code of other sorts in packages that would determine whether an API existed by checking for the existence of certain functions within that API. In addition, the Sysdep package did not scale well because it defined all of the functions that it supported, regardless of whether or not they were used. The Compat package remedies the first problem by ensuring that the new APIs are defined only within the lexical scope of the packages that actually make use of the Compat package. It remedies the second problem by ensuring that only definitions of functions that are actually used are loaded. This all works roughly according to the following scheme: @enumerate @item Part of the Compat package is a module called the Compat generator. This module is actually run as an additional step during byte compilation of a package that uses Compat. This can happen either through the makefile or through the use of an @code{eval-when-compile} call within the package code itself. 
What the generator does is scan all of the Lisp code in the package, determine which function calls are made that the Compat package knows about, and generate custom @code{compat} code that conditionally defines just these functions when the package is loaded. The custom @code{compat} code can either be written to a separate Lisp file (for use with multi-file packages), or inserted into the beginning of the Lisp file of a single-file package. (In the latter case, the package indicates where this generated code should go through the use of magic comments that mark the beginning and end of the section. Some will say that doing this trick is bad juju, but I have done this sort of thing before, and it works very well in practice.)

@item
The functions in the custom @code{compat} code have their names prefixed with both the name of the package and the word @code{compat}, ensuring that there will be no name space conflicts with other functions in the same package, or with other packages that make use of the Compat package.

@item
The actual definitions of the functions in the custom @code{compat} code are determined at run time. When the equivalent API already exists, the wrapper functions are simply defined directly in terms of the actual functions, so that the only run-time overhead from using the Compat package is one additional function call. (Alternatively, even this small overhead could be avoided by retrieving the definitions of the actual functions and supplying them as the definitions of the wrapper functions. However, this appears to me to not be completely safe. For example, it might have bad interactions with the advice package.)

@item
The code that wants to make use of the custom @code{compat} code is bracketed by a call to the construct @code{compat-execute}. What this actually does is lexically bind all of the function names that are being redefined with macro functions by using the Common Lisp macro @code{macrolet}. (The definition of this macro is in the CL package, but in order for things to work on all platforms, the definition of this macro will presumably have to be copied and inserted into the custom @code{compat} code.)
@end enumerate

In addition, the Compat package should define the macro @code{compat-if-fboundp}. (Similar macros, such as @code{compile-when-fboundp} and @code{compile-case-fboundp}, could be defined using similar principles.) The @code{compat-if-fboundp} macro behaves just like an @code{(if (fboundp ...) ...)} clause when executed, but in addition, when it's compiled, it ensures that the code inside the @code{if-true} sub-block will not cause any byte compiler warnings about the function in question being unbound.

I think that the way to implement this would be to make @code{compat-if-fboundp} be a macro that does what it's supposed to do, but which defines its own byte code handler, which ensures that the particular warning in question will be suppressed. (Actually ensuring that just the warning in question is suppressed, and not any others, might be rather tricky. It certainly requires further thought.)

Note: An alternative way of avoiding both warnings about unbound functions and warnings about obsolete functions is to just call the function in question by using @code{funcall}, instead of calling the function directly. This seems rather inelegant to me, though, and doesn't make it obvious why the function is being called in such a roundabout manner.
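To make the intended run-time behavior concrete, here is a minimal sketch of @code{compat-if-fboundp} (this is not an existing macro; the hard part, hooking the byte compiler to suppress just the one warning, is deliberately not shown):

@example
;; Sketch of the run-time half of the proposed macro only.
(defmacro compat-if-fboundp (fun if-true &rest if-false)
  `(if (fboundp ,fun)
       ,if-true
     (progn ,@@if-false)))

;; Usage: call an API that may not exist, with a fallback.
(compat-if-fboundp 'temp-directory
    (temp-directory)
  (or (getenv "TMPDIR") "/tmp"))
@end example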
Perhaps the Compat package should also provide a macro @code{compat-funcall}, which works exactly like @code{funcall}, but which indicates to anyone reading the code why the code is expressed in such a fashion. If you're wondering how to implement the part of the Compat generator where it scans Lisp code to find function calls for functions that it wants to do something about, I think the best way is to simply process the code using the Lisp function @code{read} and recursively descend any lists looking for function names as the first element of any list encountered. This might extract out a few more functions than are actually called, but it is almost certainly safer than doing anything trickier like byte compiling the code, and attempting to look for function calls in the result. (It could also be argued that the names of the functions should be extracted, not only from the first element of lists, but anywhere @code{symbol} occurs. For example, to catch places where a function is called using @code{funcall} or @code{apply}. However, such uses of functions would not be affected by the surrounding macrolet call, and so there doesn't appear to be any point in extracting them). @node Future Work -- Drag-n-Drop, Future Work -- Standard Interface for Enabling Extensions, Future Work -- Elisp Compatibility Package, Future Work @section Future Work -- Drag-n-Drop @cindex future work, drag-n-drop @cindex drag-n-drop, future work Author: @uref{mailto:ben@@xemacs.org,Ben Wing} @strong{Abstract:} I propose completely redoing the drag-n-drop interface to make it powerful and extensible enough to support such concepts as drag over and drag under visuals and context menus invoked when a drag is done with the right mouse button, to allow drop handlers to be defined for all sorts of graphical elements including buffers, extents, mode lines, toolbar items, menubar items, glyphs, etc., and to allow different packages to add and remove drop handlers for the same drop sites without interfering with each other. The changes are extensive enough that I think they can only be implemented in version 22, and the drag-n-drop interface should remain experimental until then. The new drag-n-drop interface centers around the twin concepts of @dfn{drop site} and @dfn{drop handler}. A @dfn{drop site} specifies a particular graphical element where an object can be dropped onto, and a @dfn{drop handler} encapsulates all of the behavior that happens when such an object is dragged over and dropped onto a drop site. Each drop site has an object associated with it which is passed to functions that are part of the drop handlers associated with that site. The type of this object depends on the graphical element that comprises the drop site. The drop site object can be a buffer, an extent, a glyph, a menu path, a toolbar item path, etc. (These last two object types are defined in @uref{lisp-interface.html,Lisp Interface Changes} in the sections on menu and toolbar API changes. If we wanted to allow drops onto other kinds of drop sites, for example mode lines, we would have to create corresponding path objects). Each such object type should be able to be accessed using the generalized property interface defined above, and should have a property called @code{drop-handlers} associated with it that specifies all of the drop handlers associated with the drop site. 
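To give a feel for the proposed interface, here is a sketch of registering a drop handler on a buffer. None of these functions or keywords exist yet; the names follow the proposal described in the rest of this section.

@example
;; Proposed API only; nothing here is currently implemented.
(add-drop-handler
 (current-buffer)                 ; the drop site object
 :mime-type "audio/.*"            ; regexp matched against MIME types
 :package-tag 'my-sound-package   ; identifies who installed this
 :drop-function 'my-sound-insert-clip
 :enter-function 'my-sound-highlight-site
 :leave-function 'my-sound-unhighlight-site)
@end example

The handler's functions would receive the drag or drop event, from which the drop handler object and its properties can be retrieved.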
Normally, this property is not accessed directly, but instead by using the drop handler API defined below, and Lisp packages should not make any assumptions about the format of the data contained in the @code{drop-handlers} property.

Each drop handler has an object of type @code{drop-handler} associated with it, whose primary purpose is to be a container for the various properties associated with a particular drop handler. These could include, for example, a function invoked when the drop occurs, a context menu invoked when a drop occurs as a result of a drag with the right mouse button, functions invoked when a dragged object enters, leaves, or moves within a drop site, the shape that the mouse pointer changes to when an object is dragged over a drop site that allows this particular object to be dropped onto it, the MIME types (actually a regular expression matching the MIME types) of the allowable objects that can be dropped onto the drop site, a @dfn{package tag} (a symbol specifying the package that created the drop handler, used for identification purposes), etc. The drop handler object is passed to the functions that are invoked as a result of a drag or a drop, most likely indirectly as one of the properties of the drag or drop event passed to the function. Properties of a drop handler object are accessed and modified in the standard fashion using the generalized property interface.

A drop handler is added to a drop site using the @code{add-drop-handler} function. The drop handler itself can either be created separately using the @code{make-drop-handler} function and then passed in as one of the parameters to @code{add-drop-handler}, or it will be created automatically by the @code{add-drop-handler} function, if the drop handler argument is omitted, but keyword arguments corresponding to the valid keyword properties for a drop handler are specified in the @code{add-drop-handler} call. Other functions, such as @code{find-drop-handler}, @code{add-drop-handler} (when specifying a drop handler before which the drop handler in question is to be added), @code{remove-drop-handler}, etc., should be defined with obvious semantics. All of these functions take or return a drop site object which, as mentioned above, can be one of several object types corresponding to graphical elements. Defined drop handler functions locate a particular drop handler using either the @code{MIME-type} or @code{package-tag} property of the drop handler, as defined above.

Logically, the drop handlers associated with a particular drop site are an ordered list. The first drop handler whose specified MIME type matches the MIME type of the object being dragged or dropped controls what happens to this object. This is particularly important because the specified MIME type of the drop handler can be a regular expression that, for example, matches all audio objects with any sub-type.

In the current drag-n-drop API, there is a distinction made between objects with an associated MIME type and objects with an associated URL. I think that this distinction is arbitrary, and should not exist. All objects should have a MIME type associated with them, and a new XEmacs-specific MIME type should be defined for URLs, file names, etc. as necessary. I am not even sure that this is necessary, however, as the MIME specification may specify a general concept of a pointer or link to an object, which is exactly what we want.
Also in some cases (for example, the name of a file that is locally available), the pointer or link will have another MIME type associated with it, which is the type of the object that is being pointed to. I am not quite sure how we should handle URL and file name objects being dragged, but I am positive that it needs to be integrated with the mechanism used when an object itself is being dragged or dropped.

As is described in @uref{misc-user-event.html,a separate page}, the @code{misc-user-event} event type should be removed and split up into a number of separate event types. Two such event types would be @code{drag-event} and @code{drop-event}. A drop event is used when an object is actually dropped, and a drag event is used if a function is invoked as part of the dragging process. (Such a function would typically be used to control what are called @dfn{drag under visuals}, which are changes to the appearance of the drop site reflecting the fact that a compatible object is being dragged over it.)

The drag events and drop events encapsulate all of the information that is pertinent to the drag or drop action occurring, including such information as the actual MIME type of the object in question, the drop handler that caused a function to be invoked, the mouse event (or possibly even a keyboard event) corresponding to the user's action that is causing the drag or drop, etc. This event is always passed to any function that is invoked as a result of the drag or drop. There should never be any need to refer to the @code{current-mouse-event} variable, and in fact, this variable should not be changed at all during a drag or a drop.

@node Future Work -- Standard Interface for Enabling Extensions, Future Work -- Better Initialization File Scheme, Future Work -- Drag-n-Drop, Future Work
@section Future Work -- Standard Interface for Enabling Extensions
@cindex future work, standard interface for enabling extensions
@cindex standard interface for enabling extensions, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@strong{Abstract:} Apparently, if you know the name of a package (for example, @code{fusion}), you can load it using the @code{require} function, but there's no standard way to turn it on or turn it off. The only way to figure out how to do that is to go read the source file, where hopefully the comments at the start tell you the appropriate magic incantations that you need to run in order to turn the extension on or off. There really need to be standard functions, such as @code{enable-extension} and @code{disable-extension}, to do this sort of thing. It seems like a glaring omission that this isn't currently present, and it's really surprising to me that nobody has remarked on this.

The easy part of this is defining the interface, and I think it should be done as soon as possible. When the package is loaded, it simply calls some standard function in the package system, and passes it the names of enable and disable functions, or perhaps just one function that takes an argument specifying whether to enable or disable. In any case, this data is kept in a table which is used by the @code{enable-extension} and @code{disable-extension} functions. There should also be functions such as @code{extension-enabled-p} and @code{enabled-extension-list}, and so on with obvious semantics.

The hard part is actually getting packages to obey this standard interface, but this is mitigated by the fact that the changes needed to support this interface are so simple.
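As a sketch of how small the required changes could be (the registration function and table here are hypothetical, following the description above):

@example
;; Hypothetical sketch of the proposed interface.
(defvar extension-table (make-hash-table)
  "Map from extension name to its control function.")

(defun register-extension (name control-function)
  "Register CONTROL-FUNCTION as the switch for extension NAME.
It is called with t to enable the extension, nil to disable it."
  (puthash name control-function extension-table))

(defun enable-extension (name)
  (funcall (gethash name extension-table) t))

(defun disable-extension (name)
  (funcall (gethash name extension-table) nil))

;; A package like @code{fusion} would then need just one line at
;; load time:
;;   (register-extension 'fusion 'fusion-set-enabled)
@end example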
I have been conceiving of these enabling and disabling functions as turning the feature on or off globally. It's probably also useful to have a standard interface for turning an extension on or off in just a particular buffer. Perhaps then the appropriate interface would involve registering a single function that takes an argument that specifies various things, such as turn off globally, turn on globally, turn on or off in the current buffer, etc.

Part of this interface should specify the correct way to define global key bindings. The correct rule for this, of course, is that the key bindings should not happen when the package is loaded, which is often how things are currently done, but only when the extension is actually enabled. The key bindings should go away when the extension is disabled. I think that in order to support this properly, we should expand the keymap interface slightly, so that, in addition to the other properties associated with each key binding, there is a list of shadow bindings. Then there should be a function called @code{define-key-shadowing}, which is just like @code{define-key} but which also remembers the previous key binding in a shadow list. Then there can be another function, something like @code{undefine-key}, which restores the binding to the most recently added item on the shadow list. There are already hash tables associated with each key binding, and it should be easy to stuff additional values, such as a shadow list, into the hash table. Probably there should also be functions called @code{global-set-key-shadowing} and @code{global-unset-key-shadowing} with obvious semantics; a rough sketch of the shadowing idea appears at the end of this section.

Once this interface is defined, it should be easy to expand the custom package so it knows about this interface. Then it will be possible to put all sorts of extensions on the options menu so that they could be turned off and turned on very easily, and then when you save the options out to a file, the desired settings for whether these extensions are enabled or not are saved out with it. A whole lot of custom junk that's been added to a lot of different packages could be removed.

After doing this, we might want to think of a way to classify extensions according to how likely we think the user will want to use them. This way we can avoid the problem of having a list of 100 extensions and the user not being able to figure out which ones might be useful. Perhaps the most useful extensions would appear immediately on the extensions menu, and the less useful ones would appear in a submenu of that, and another submenu might contain even less useful extensions. Of course the package authors might not be too happy with this, but the users probably will be. I think this at least deserves a thought, although it's possible you might simply want to maintain a list of extensions on the web site, along with judgments of, first of all, how commonly a user might want each extension, and, second of all, how well written and bug-free the package is. Both of these sorts of judgments could be obtained by doing user surveys if need be.
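Here is the rough sketch of the shadowing idea mentioned above. It is purely illustrative: it keeps the shadow lists in a side table, whereas the actual proposal would store them in the hash tables already associated with each key binding.

@example
;; Illustrative sketch only.
(defvar key-shadow-table (make-hash-table :test 'equal)
  "Map from (KEYMAP . KEY) to a list of shadowed bindings.")

(defun define-key-shadowing (keymap key def)
  "Like `define-key', but push the old binding onto KEY's shadow list."
  (let ((slot (cons keymap key)))
    (puthash slot
             (cons (lookup-key keymap key)
                   (gethash slot key-shadow-table))
             key-shadow-table)
    (define-key keymap key def)))

(defun undefine-key (keymap key)
  "Restore the most recently shadowed binding of KEY in KEYMAP."
  (let* ((slot (cons keymap key))
         (shadows (gethash slot key-shadow-table)))
    (define-key keymap key (car shadows))
    (puthash slot (cdr shadows) key-shadow-table)))
@end example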
@node Future Work -- Better Initialization File Scheme, Future Work -- Keyword Parameters, Future Work -- Standard Interface for Enabling Extensions, Future Work
@section Future Work -- Better Initialization File Scheme
@cindex future work, better initialization file scheme
@cindex better initialization file scheme, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@strong{Abstract:} A proposal is outlined for converting XEmacs to use the @code{.xemacs} subdirectory for its initialization files instead of putting them in the user's home directory. In the process, a general pre-initialization scheme is created whereby all of the initialization parameters, such as the location of the initialization files, whether these files are loaded or not, where the initial frame is created, etc., that are currently specified by command line arguments, by environment variables, and other means, can be specified in a uniform way using Lisp code. Reasonable default behavior for everything will still be provided, and the older, simpler means can be used if desired. Compatibility with the current location and name of the initialization file, and the current ill-chosen use for the @code{.xemacs} directory, is maintained, and the problem of how to gracefully migrate a user from the old scheme into the new scheme while still allowing the user to use GNU Emacs or older versions of XEmacs is solved. A proposal for changing the way that the initial frame is mapped is also outlined; this would allow the user's initialization file to control the way that the initial frame appears without resorting to hacks, while still making echo area messages visible as they appear, and allowing the user to debug errors in the initialization file.

@subheading Principles in the new scheme

@enumerate
@item
XEmacs has a defined @dfn{pre-initialization process}. This process, whose purpose is to compute the values of the parameters that control how the initialization process proceeds, occurs as early as possible after the Lisp engine has been initialized, and in particular, it occurs before any devices have been opened, and before any initialization parameters are set that could reasonably be expected to be changed. In fact, the pre-initialization process should take care of setting these parameters. The code that implements the pre-initialization process should be written in Lisp and should be called from the Lisp function @code{normal-top-level}, and the general way that the user customizes this process should also be done using Lisp code.

@item
The pre-initialization process involves a number of properties, for example, the directory containing the user initialization files (normally the @code{.xemacs} subdirectory), the name of the user init file, the name of the custom init file, where and of what type the initial device is, whether and when the initial frame is mapped, etc. A standard interface is provided for getting and setting the values of these properties using functions such as @code{set-pre-init-property}, @code{pre-init-property}, etc. At various points during the pre-initialization process, the values of many of these properties can be undecided, which means that at the end of the process, the values of these properties will be derived from other properties in some fashion that is specific to each property.

@item
The default values of these properties are set first from the registry under Windows, then from environment variables, then from command line switches, such as @code{-q} and @code{-nw}.
@item One of the command line switches is @code{-pre-init}, whose value is a Lisp expression to be evaluated at pre-initialization time, similar to the @code{-eval} command line switch. This allows any pre-initialization property to be set from the command line. @item Let's define the term @dfn{to determine a pre-initialization property} to mean if the value of a property is undetermined, it is computed and set according to a rule that is specific to the property. Then after the pre-init properties are initialized from the registry, from the environment variables, from command line arguments, two of the pre-init properties (specifically the init file directory and the location of the @dfn{pre-init file}) are determined. The purpose of the pre-init file is to contain Lisp code that is run at pre-initialization time, and to control how the initialization proceeds. It is a bit similar to the standard init file, but the code in the pre-init file shouldn't do anything other than set pre-init properties. Executing any code that does I/O might not produce expected results because the only device that will exist at the time is probably a stream device connected to the standard I/O of the XEmacs process. @item After the pre-init file has been run, all of the rest of the pre-init properties are determined, and these values are then used to control the initialization process. Some of the rules used in determining specific properties are: @enumerate @item If the @code{.xemacs} sub-directory exists, and it's not obviously a package root (which probably means that it contains a file like @code{init.el} or @code{pre-init.el}, or if neither of those files is present, then it doesn't contain any sub-directories or files that look like what would be in a package root), then it becomes the value of the init file directory. Otherwise the user's home directory is used. @item If the init file directory is the user's home directory, then the init file is called @code{.emacs}. Otherwise, it's called @code{init.el}. @item If the init file directory is the user's home directory, then the pre-init file is called @code{.xemacs-pre-init.el}. Otherwise it's called @code{pre-init.el}. (One of the reasons for this rule has to do with the dialog box that might be displayed at startup. This will be described below.) @item If the init file directory is the user's home directory, then the custom init file is called @code{.xemacs-custom-init.el}. Otherwise, it's called @code{custom-init.el}. @end enumerate @item After the first normal device is created, but before any frames are created on it, the XEmacs initialization code checks to see if the old init file scheme is being used, which is to say that the init file directory is the same as the user's home directory. If that's the case, then normally a dialog box comes up (or a question is asked on the terminal if XEmacs is being run in a non-windowing mode) which asks if the user wants to migrate his initialization files to the new scheme. The possible responses are @strong{Yes}, @strong{No}, and @strong{No, and don't ask this again}. If this last response is chosen, then the file @code{.xemacs-pre-init.el} in the user's home directory is created or appended to with a line of Lisp code that sets up a pre-init property indicating that this dialog box shouldn't come up again. 
If the @strong{Yes} option is chosen, then any package root files in @code{.xemacs} are moved into @code{.xemacs/packages}, the file @code{.emacs} is moved into @code{.xemacs/init.el}, and @code{.emacs} in the home directory becomes a symlink to this file. This way some compatibility is still maintained with GNU Emacs and older versions of XEmacs. The code that implements this has to be written very carefully to make sure that it doesn't accidentally delete or mess up any of the files that get moved around.
@end enumerate

@subheading The custom init file

The @dfn{custom init file} is where the custom package writes its options. This obviously needs to be a separate file from the standard init file. It should also be loaded before the init file rather than after, as is usually done currently, so that the init file can override these options if it wants to.

@subheading Frame mapping

In addition to the above scheme, the way that XEmacs handles mapping the initial frame should be changed. However, this change perhaps should be delayed to a later version of XEmacs because of the user-visible changes that it entails and the possible breakage in people's init files that might occur. (For example, if the rest of the scheme is implemented in 21.2, then this part of the scheme might want to be delayed until version 22.)

The basic idea is that the initial frame is not created before the initialization file is run, but instead a banner frame is created containing the XEmacs logo, a button that allows the user to cancel the execution of the init file, and an area where messages that are output in the process of running this file are displayed. This area should contain a number of lines, which makes it better than the current scheme where only the last message is visible. After the init file is done, the initial frame is mapped. This way the init file can make face changes and other such modifications that affect the initial frame, and then have the initial frame correctly come up with these changes, without any of the frame dancing or other problems that exist currently.

There should be a function that allows the initialization file to explicitly create and map the first frame if it wants to. There should also be a pre-init property that controls whether the banner frame appears (of course, it defaults to true), a property controlling when the initial frame is created (before or after the init file, defaulting to after), and a property controlling whether the initial frame is mapped (normally true, but will be false if the @code{-unmapped} command line argument is given). If an error occurs in the init file, then the initial frame should always be created and mapped at that time so that the error is displayed and the debugger has a place to be invoked.

@node Future Work -- Keyword Parameters, Future Work -- Property Interface Changes, Future Work -- Better Initialization File Scheme, Future Work
@section Future Work -- Keyword Parameters
@cindex future work, keyword parameters
@cindex keyword parameters, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

NOTE: These changes are partly motivated by the various user-interface changes elsewhere in this document, and partly for Mule support. In general, the various APIs in this document would benefit greatly from built-in keywords.

I would like to make keyword parameters an integral part of Elisp. The idea here is that you use the @code{&key} identifier in the parameter list of a function and all of the following parameters specified are keyword parameters.
This means that when these arguments are specified in a function call, they are immediately preceded in the argument list by a @dfn{keyword}, which is a symbol beginning with the `:' character. This allows any argument to be specified independently of any other argument with no need to place the arguments in any particular order. This is particularly useful for functions that take many optional parameters; using keyword parameters makes the code much cleaner and easier to understand. The @code{cl} package already provides keyword parameters of a sort, but I would like to make this more integrated and useable in a standard fashion. The interface that I am proposing is essentially compatible with the keyword interface in Common Lisp, but it may be a subset of the Common Lisp functionality, especially in the first implementation. There is one departure from the Common Lisp specification that I would like to make in order to make it much easier to add keyword parameters to existing functions with optional parameters, and in general, to make optional and keyword parameters coexist more easily. The Common Lisp specification indicates that if a function has both optional and keyword parameters, the optional parameters are always processed before the keyword parameters. This means, for example, that if a function has three required parameters, two optional parameters, and some number of keyword parameters following, and the program attempts to call this function by passing in the three required arguments, and then some keyword arguments, the first keyword specified and the argument following it get assigned to the first and second optional parameters as specified in the function definition. This is certainly not what is intended, and means that if a function defines both optional and keyword parameters, any calls of this function must specify @code{nil} for all of the optional arguments before using any keywords. If the function definition is later changed to add more optional parameters, all existing calls to this function that use any keyword arguments will break. This problem goes away if we simply process keyword parameters before the optional parameters. The primary changes needed to support the keyword syntax are: @enumerate @item The subr object type needs to be modified to contain additional slots for the number and names of any keyword parameters. @item The implementation of the @code{funcall} function needs to be modified so that it knows how to process keyword parameters. This is the only place that will require very much intricate coding, and much of the logic that would need to be added can be lifted directly from the @code{cl} code. @item A new macro, similar to the @code{DEFUN} macro, and probably called @code{DEFUN_WITH_KEYWORDS}, needs to be defined so that built-in Lisp primitives containing keywords can be created. Now, the @code{DEFUN_WITH_KEYWORDS} macro should take an additional parameter which is a string, which consists of the part of the lambda list declaration for this primitive that begins with the @code{&key} specifier. This string is parsed in the @code{DEFSUBR} macro during XEmacs initialization, and is converted into the appropriate structure that needs to be stored into the subr object. In addition, the @var{max_args} parameter of the @code{DEFUN} macro needs to be incremented by the number of keyword parameters and these parameters are passed to the C function simply as extra parameters at the end. 
The @code{DEFSUBR} macro can sort out the actual number of required, optional and keyword parameters that the function takes, once it has parsed the keyword parameter string. (An alternative that might make the declaration of a primitive a little bit easier to understand would involve adding another parameter to the @code{DEFUN_WITH_KEYWORDS} macro that specifies the number of keyword parameters. However, this would require some additional complexity in the preprocessor definition of the @code{DEFUN_WITH_KEYWORDS} macro, and probably isn't worth implementing). @item The byte compiler would have to be modified slightly so that it knows about keyword parameters when it parses the parameter declaration of a function. For example, so that it issues the correct warnings concerning calls to that function with incorrect arguments. @item The @code{make-docfile} program would have to be modified so that it generates the correct parameter lists for primitives defined using the @code{DEFUN_WITH_KEYWORDS} macro. @item Possibly other aspects of the help system that deal with function descriptions might have to be modified. @item A helper function might need to be defined to make it easier for primitives that use both the @code{&rest} and @code{&key} specifiers to parse their argument lists. @end enumerate @subheading Internal API for C primitives with keywords - necessary for many of the new Mule APIs being defined. @example DEFUN_WITH_KEYWORDS (Ffoo, "foo", 2, 5, 6, ALLOW_OTHER_KEYWORDS, (ichi, ARG_NIL), (ni, ARG_NIL), (san, ARG_UNBOUND), 0, (arg1, arg2, arg3, arg4, arg5) ) @{ ... @} -> C fun of 12 args: (arg1, ... arg5, ichi, ..., roku, other keywords) Circled in blue is actual example declaration DEFUN_WITH_KEYWORDS (Ffoo, "foo", 1,2,0 (bar, baz) <- arg list [ MIN ARGS, MAX ARGS, something that could be REST, SPECIFY_DEFAULT or REST_SPEC] [#KEYWORDS [ ALLOW_OTHER, SPECIFY_DEFAULT, ALLOW_OTHER_SPECIFY_DEFAULT 6, ALLOW_OTHER_SPECIFY_DEFAULT, (ichi, 0) (ni, 0), (san, DEFAULT_UNBOUND), (shi, "t"), (go, "5"), (roku, "(current-buffer)") <- specifies arguments, default values (string to be read into Lisp data during init; then forms evalled at fn ref time. ,0 <- [INTERACTIVE SPEC] ) LO = Lisp_Object -> LO Ffoo (LO bar, LO baz, LO ichi, LO ni, LO san, LO shi, LO go, LO roku, int numkeywords, LO *other_keywords) #define DEFUN_WITH_KEYWORDS (fun, funstr, minargs, maxargs, argspec, \ #args, num_keywords, keywordspec, keywords, intspec) \ LO fun (DWK_ARGS (maxargs, args) \ DWK_KEYWORDS (num_keywords, keywordspec, keywords)) #define DWK_KEYWORDS (num_keywords, keywordspec, keywords) \ DWK_KEYWORDS ## keywordspec (keywords) DWK_OTHER_KEYWORDS ## keywordspec) #define DWK_KEYWORDS_ALLOW_OTHER (x,y) DWK_KEYWORDS (x,y) #define DWK_KEYWORDS_ALLOW_OTHER_SPECIFICATIONS (x,y) DWK_KEYWORDS_SPECIFY_DEFAULT (x,y) #define DWK_KEYWORDS_SPECIFY_DEFAULT (numkey, key) ARGLIST_CAR ## numkey key #define ARGLT_GRZ (x,y) LO CAR x, LO CAR y @end example @node Future Work -- Property Interface Changes, Future Work -- Toolbars, Future Work -- Keyword Parameters, Future Work @section Future Work -- Property Interface Changes @cindex future work, property interface changes @cindex property interface changes, future work Author: @uref{mailto:ben@@xemacs.org,Ben Wing} In my past work on XEmacs, I already expanded the standard property functions of @code{get}, @code{put}, and @code{remprop} to work on objects other than symbols and defined an additional function @code{object-plist} for this interface. 
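For reference, this is what that generalized interface looks like in use on a non-symbol object (an extent here; the property name is an arbitrary example):

@example
;; The generalized property interface applied to an extent.
(setq ext (make-extent 1 10))      ; an extent in the current buffer
(put ext 'my-tag 'important)       ; set a user-defined property
(get ext 'my-tag)                  ; @result{} important
(object-plist ext)                 ; plist containing my-tag
(remprop ext 'my-tag)              ; make my-tag unbound again
@end example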
I'd like to expand this interface further and advertise it as the standard way to make property changes in objects, especially the new objects that are going to be defined in order to support the added user interface features of version 22. My proposed changes are as follows:

@enumerate
@item
A new concept associated with each property called a @dfn{default value} is introduced. (This concept already exists, but not in a well-defined way.) The default value is the value that the property assumes for certain value retrieval functions such as @code{get} when it is @dfn{unbound}, which is to say that its value has not been explicitly specified. Note: the way to make a property unbound is to call @code{remprop}. Note also that for some built-in properties, setting the property to its default value is equivalent to making it unbound.

@item
The behavior of the @code{get} function is modified. If the @code{get} function is called on a property that is unbound and the third, optional @var{default} argument is @code{nil}, then the default value of the property is returned. If the @var{default} argument is not @code{nil}, then whatever was specified as the value of this argument is returned. For the most part, this is upwardly compatible with the existing definition of @code{get} because all user-defined properties have an initial default value of @code{nil}. Code that calls the @code{get} function and specifies @code{nil} for the @var{default} argument, and expects to get @code{nil} returned if the property is unbound, is almost certainly wrong anyway.

@item
A new function, @code{get1}, is defined. This function does not take a default argument like the @code{get} function. Instead, if the property is unbound, an error is signaled. Note: @code{get} can be implemented in terms of @code{get1}.

@item
New functions @code{property-default-value} and @code{property-bound-p} are defined with the obvious semantics.

@item
An additional function @code{property-built-in-p} is defined which takes two arguments, the first one being a symbol naming an object type, and the second one specifying a property, and indicates whether the property name has a built-in meaning for objects of that type.

@item
It is not necessary, or even desirable, for all object types to allow user-defined properties. It is always possible to simulate user-defined properties for an object by using a weak hash table. Therefore, whether an object allows a user to define properties or not should depend on the meaning of the object. If an object does not allow user-defined properties, the @code{put} function should signal an error, such as @code{undefined-property}, when given any property other than those that are predefined.

@item
A function called @code{user-defined-properties-allowed-p} should be defined with the obvious semantics. (See the previous item.)

@item
Three more functions should be defined, called @code{built-in-property-name-list}, @code{property-name-list}, and @code{user-defined-property-name-list}.
@end enumerate

Another idea:

@example
(define-property-method
  predicate object-type
  predicate cons
  :(KEYWORD) (all lists beginning with KEYWORD)
  :put putfun
  :get
  :remprop
  :object-props
  :clear-properties
  :map-properties

e.g.
(define-property-method 'hash-table
  :put #'(lambda (obj key value) (puthash key value obj)))
@end example

@node Future Work -- Toolbars, Future Work -- Menu API Changes, Future Work -- Property Interface Changes, Future Work
@section Future Work -- Toolbars
@cindex future work, toolbars
@cindex toolbars

@menu
* Future Work -- Easier Toolbar Customization::
* Future Work -- Toolbar Interface Changes::
@end menu

@node Future Work -- Easier Toolbar Customization, Future Work -- Toolbar Interface Changes, Future Work -- Toolbars, Future Work -- Toolbars
@subsection Future Work -- Easier Toolbar Customization
@cindex future work, easier toolbar customization
@cindex easier toolbar customization, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@strong{Abstract:} One of XEmacs' greatest strengths is its ability to
be customized endlessly.  Unfortunately, it is often too difficult to
figure out how to do this.  There has been some recent work like the
Custom package, which helps in this regard, but I think there's a lot
more work that needs to be done.  Here are some ideas (which certainly
could use some more thought).

Although there is currently an @code{edit-toolbar} package, it is not
well integrated with XEmacs, and in general it is much too hard to
customize the way toolbars look.  I would like to see an interface that
works a bit like the way things work under Windows, where you can
right-click on a toolbar to get a menu of options that allows you to
change aspects of the toolbar.  The general idea is that if you
right-click on an item itself, you can do things to that item, whereas
if you right-click on a blank part of a toolbar, you can change the
properties of the toolbar.

Some of the items on the right-click menu for a particular toolbar
button should be specified by the button itself.  Others should be
standard.  For example, there should be an @strong{Execute} item which
simply does what would happen if you left-click on a toolbar button.
There should probably be a @strong{Delete} item to get rid of the
toolbar button and a @strong{Properties} item, which brings up a
property sheet that allows you to do things like change the icon and
the command string that's associated with the toolbar button.

The options to change the appearance of the toolbar itself should
probably appear both on the context menu for specific buttons, and on
the menu that appears when you click on a blank part of the toolbar.
That way, if there isn't a blank part of the toolbar, you can still
change the toolbar appearance.  As for what appears in these items, in
Outlook Express, for example, there are three different menu items, one
of which is called @strong{Buttons}, which pops up a window for editing
the toolbar; for us, this could pop up a new frame running
@code{edit-toolbar.el}.  The second item is called @strong{Align},
which contains a submenu with @strong{Top}, @strong{Bottom},
@strong{Left}, and @strong{Right}, which would be just like setting the
default toolbar position.  The third one says @strong{Text Labels},
which would just let you select whether there are captions or not.  I
think all three of these are useful and are easy to implement in
XEmacs.

These things also need to be integrated with custom so that a user can
control whether these options apply to all sessions, and in such a case
can save the settings out to an options file.  @code{edit-toolbar.el}
in particular needs to integrate with custom.
Currently it has some sort of hokey stuff of its own, which it saves
out to a @code{.toolbar} file.

Another useful option to have, once we draw the captions dynamically
rather than using pre-generated ones, would be the ability to change
the font size of the captions.  I'm sure that Kyle, for one, would
appreciate this.

(This is incomplete...)

@node Future Work -- Toolbar Interface Changes, , Future Work -- Easier Toolbar Customization, Future Work -- Toolbars
@subsection Future Work -- Toolbar Interface Changes
@cindex future work, toolbar interface changes
@cindex toolbar interface changes, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

I propose changing the way that toolbars are specified to make them
more flexible.

@enumerate
@item
A new format for the vector that specifies a toolbar item is allowed.
In this format, the first three items of the vector are required and
are, respectively, a caption, a glyph list, and a callback.  The glyph
list and callback arguments are the same as in the current toolbar item
specification, and the caption is a string specifying the caption text
placed below the toolbar glyph.  The caption text is required so that
toolbar items can be identified for the purpose of retrieving and
changing their property values.  Putting the caption first also makes
it easy to distinguish between the new and the old toolbar item vector
formats.  In the old format, the first item, the glyph list, is either
a list or a symbol.  In the new format, the first item is a string.
Following the three required items in the new format are optional
keyword items, specified using keywords in the same format as in the
menu item vector format.  The keywords that should be predefined are
@code{:help-echo}, @code{:context-menu}, @code{:drop-handlers}, and
@code{:enabled-p}.  The @code{:enabled-p} and @code{:help-echo} keyword
arguments are the same as the third and fourth items in the old toolbar
item vector format.  The @code{:context-menu} keyword is a list in
standard menu format that specifies additional items that will appear
when the context menu for the toolbar item is popped up.  (Typically,
this happens when the right mouse button is clicked on the toolbar
item.)  The @code{:drop-handlers} keyword is for use by the new
drag-n-drop interface (see @uref{drag-n-drop.html,Drag-n-Drop Interface
Changes}), and is not normally specified or modified directly.

@item
Conceivably, there could also be keywords that are associated with a
toolbar itself, rather than with a particular toolbar item.  These
keyword properties would be specified using keywords and arguments that
occur before any toolbar item vectors, similarly to how things are done
in menu specifications.  Possible properties could include
@code{:captioned-p} (whether the captions are visible under the
toolbar), @code{:glyphs-visible-p} (whether the toolbar glyphs are
visible), and @code{:context-menu} (additional items that will appear
on the context menus for all toolbar items and additionally will appear
on the context menu that is popped up when the right mouse button is
clicked over a portion of the toolbar that does not have any toolbar
buttons in it).  The current standard practice with regard to such
properties seems to be to have separate specifiers, such as
@code{left-toolbar-width}, @code{right-toolbar-width},
@code{left-toolbar-visible-p}, @code{right-toolbar-visible-p}, etc.
(A sketch of what an instantiator using the proposed keywords might
look like appears below.)
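For concreteness, here is what a toolbar instantiator combining the two
proposed extensions might look like.  This is an illustrative sketch
only: the icon variables and callbacks are invented, and only the
keywords proposed above are assumed.

@example
;; Hypothetical sketch of the proposed toolbar instantiator format.
'(:captioned-p t                    ; toolbar-level properties come
  :glyphs-visible-p t               ; before any item vectors
  :context-menu (["Edit Toolbar..." edit-toolbar t])
  ["Open" open-icon toolbar-open    ; caption, glyph list, callback
   :help-echo "Open a file"
   :enabled-p t]
  ["Cut" cut-icon toolbar-cut
   :help-echo "Kill the selected region"
   :enabled-p (region-active-p)])
@end example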
It could easily be argued that there should be no such toolbar
specifiers and that all such properties should be part of the toolbar
instantiator itself.  In this scheme, the only separate specifiers that
would exist for individual properties would be default values.  There
are a lot of reasons why an interface change like this makes sense.
For example, currently when VM sets its toolbar, it also sets the
toolbar width and similar properties.  If you change which edge of the
frame the VM toolbar occurs in, VM will also have to go and modify all
of the position-specific toolbar specifiers for all of the other
properties associated with a toolbar.  It doesn't really seem to make
sense to me for the user to be specifying the width and visibility and
such of specific toolbars that are attached to specific edges, because
the user should be free to move the toolbars around and expect that all
of the toolbar properties automatically move with the toolbar.  (It is
also easy to imagine, for example, that a toolbar might not be attached
to the edge of the frame at all, but might be floating somewhere on the
user's screen.)  With an interface where these properties are separate
specifiers, this has to be done manually.  Currently, having the
various toolbar properties be inside of toolbar instantiators makes
them difficult to modify, but this will be different with the API that
I propose below.

@item
I propose an API for modifying toolbar and toolbar item properties, as
well as making other changes to toolbar instantiators, such as
inserting or deleting toolbar items.  This API is based around the
concept of a path.  There are two kinds of paths here -- @dfn{toolbar
paths} and @dfn{toolbar item paths}.  Each kind of path is an object
(of type @code{toolbar-path} and @code{toolbar-item-path},
respectively) whose properties specify the location in a toolbar
instantiator where changes to the instantiator can be made.  A toolbar
path, for example, would be created using the @code{make-toolbar-path}
function, which takes a toolbar specifier (or optionally, a symbol,
such as @code{left}, @code{right}, @code{default}, or @code{nil}, which
refers to a particular toolbar), and optionally, parameters such as the
locale and the tag set, which specify which actual instantiator inside
of the toolbar specifier is to be modified.  A toolbar item path is
created similarly using a function called
@code{make-toolbar-item-path}, which takes a toolbar specifier and a
string naming the caption of the toolbar item to be modified, as well
as, optionally, the locale and tag set parameters and such.  The
usefulness of these path objects is as arguments to functions that will
use them as pointers to the place in a toolbar instantiator where the
modification should be made.  Recall, for example, the generalized
property interface described above.  If a function such as @code{get}
or @code{put} is called on a toolbar path or toolbar item path, it will
use the information contained in the path object to retrieve or modify
a property located at the end of the path.  The toolbar path objects
can also be passed to new functions that I propose defining, such as
@code{add-toolbar-item}, @code{delete-toolbar-item}, and
@code{find-toolbar-item}.  These functions should be parallel to the
functions for inserting, deleting, finding, etc. items in a menu.  (A
sketch of how this API might be used appears below.)
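Here is a rough sketch of how the proposed path API might be used.
None of these functions exist yet; the argument conventions shown are
only one possibility.

@example
;; Sketch only -- proposed API, not existing functions.
(setq tp  (make-toolbar-path 'default))              ; a whole toolbar
(setq tip (make-toolbar-item-path 'default "Open"))  ; item, by caption

(get tip :help-echo)                  ; retrieve an item property
(put tip :help-echo "Open a file")    ; modify an item property
(put tp  :captioned-p nil)            ; modify a toolbar-level property

;; Structural changes go through the same path objects:
(add-toolbar-item tp ["Print" print-icon toolbar-print] 'append)
(delete-toolbar-item (make-toolbar-item-path 'default "Cut"))
@end example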
The toolbar item path objects can also be passed to the drop-handler
functions defined in @uref{drag-n-drop.html,Drag-n-Drop Interface
Changes} to retrieve or modify the drop handlers that are associated
with a toolbar item.  (The idea here is that you can drag an object and
drop it onto a toolbar item, just as you could onto a buffer, an
extent, a menu item, or any other graphical element.)

@item
We should at least think about allowing for separate default and
buffer-local toolbars.  The user should either be able to position
these toolbars one above the other, or side by side, occupying a single
toolbar line.  In the latter case, the boundary between the toolbars
should be draggable, and if a toolbar takes up more room than is
allocated for it, there should be arrows that appear on one or both
sides of the toolbar so that the items in the toolbar can be scrolled
left or right.  (For that matter, this sort of interface should exist
even when there is only one toolbar that is on a particular toolbar
line, because the toolbar may very well have more items than can be
displayed at once, and it's silly in such a case if it's impossible to
access the items that are not currently visible.)

@item
The default context menu for toolbars (which should be specified using
a specifier called @code{default-toolbar-context-menu} according to the
rules defined above) should contain entries allowing the user to modify
the appearance of a toolbar.  Entries would include, for example,
whether the toolbar is captioned, whether the glyphs for the toolbar
are visible (if the toolbar is captioned but its glyphs are not
visible, the toolbar appears as nothing but text; you can set things up
this way, for example, in Netscape), an option that brings up a package
for editing the contents of a toolbar, an option to allow the caption
face to be changed (perhaps through an @code{edit-faces} or
@code{custom} interface), etc.
@end enumerate

@node Future Work -- Menu API Changes, Future Work -- Removal of Misc-User Event Type, Future Work -- Toolbars, Future Work
@section Future Work -- Menu API Changes
@cindex future work, menu API changes
@cindex menu API changes, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@enumerate
@item
I propose making a specifier for the menubar associated with the frame.
The specifier should be called @code{default-menubar} and should
replace the existing @code{current-menubar} variable.  This would
increase the power of the menubar interface and bring it in line with
the toolbar interface.  (In order to provide proper backward
compatibility, we might have to
@uref{symbol-value-handlers.html,complete the symbol value handler
mechanism}.)

@item
I propose an API for modifying menu instantiators similar to the API
proposed above for toolbar instantiators.  A new object called a
@dfn{menu path} (of type @code{menu-path}) can be created using the
@code{make-menu-path} function, and specifies a location in a
particular menu instantiator where changes can be made.  The first
argument to @code{make-menu-path} specifies which menu to modify and
can be a specifier, a value such as @code{nil} (which means to modify
the default menubar associated with the selected frame), or perhaps
some other kind of specification referring to some other menu, such as
the context menus invoked by the right mouse button.  The second
argument to @code{make-menu-path}, also required, is a list of zero or
more strings that specifies the particular menu or menu item in the
instantiator that is being referred to.
The remaining arguments are optional and would be a locale, a tag set,
etc.  The menu path object can be passed to @code{get}, @code{put} or
other standard property functions to access or modify particular
properties of a menu or a menu item.  It can also be passed to expanded
versions of the existing functions such as @code{find-menu-item},
@code{delete-menu-item}, @code{add-menu-button}, etc.  (It is really a
shame that @code{add-menu-item} is an obsolete function, because it is
a much better name than @code{add-menu-button}.)  Finally, the menu
path object can be passed to the drop-handler functions described in
@uref{drag-n-drop.html,Drag-n-Drop Interface Changes} to access or
modify the drop handlers that are associated with a particular menu
item.

@item
New keyword properties should be added to the menu item vector.  These
include @code{:help-echo}, @code{:context-menu} and
@code{:drop-handlers}, with similar semantics to the corresponding
keywords for toolbar items.  (It may seem a bit strange at first to
have a context menu associated with a particular menu item, but it is a
user interface concept that exists both in Open Look and in Windows,
and really makes a lot of sense if you give it a bit of thought.)
These properties may not actually be implemented at first, but at least
the keywords for them should be defined.
@end enumerate

@node Future Work -- Removal of Misc-User Event Type, Future Work -- Mouse Pointer, Future Work -- Menu API Changes, Future Work
@section Future Work -- Removal of Misc-User Event Type
@cindex future work, removal of misc-user event type
@cindex removal of misc-user event type, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@strong{Abstract:} This page describes why the misc-user event type
should be split up into a number of different event types, and how to
do this.

The misc-user event should not exist as a single event type.  It should
be split up into a number of different event types: one for scrollbar
events, one for menu events, and one or two for drag-n-drop events.
Possibly there will be other event types created in the future.  The
reason for this is that the misc-user event was a bad design choice
when I made it, and it has only gotten worse with Oliver's attempts to
add features to it to make it usable for drag-n-drop.  I know that
there was originally a separate drag-n-drop event type, and it was
folded into the misc-user event type on my recommendation, but I have
now realized the error of my ways.

I had originally created a single event type in an attempt to prevent
some Lisp programs from breaking because they might have a case
statement over various event types, and would not be able to handle new
event types appearing.  I think now that these programs simply need to
be written so that they can handle new event types appearing.  It's not
very hard to do this.  You just use predicates instead of doing a case
statement over the event type.  If we preserve the existing predicate
called @code{misc-user-event-p}, and just make sure that it evaluates
to true when given any user event type other than the standard simple
ones, then most existing code will not break either when we split the
event types up like this, or if we add any new event types in the
future.

More specifically, the only clean way to design the misc-user event
type would be to add a sub-type field to it, and then have the nature
of all the other fields in the event type be dependent on this
sub-type.
But then in essence, we'd just be reimplementing the whole event-type
scheme inside of misc-user events, which would be rather pointless.

@node Future Work -- Mouse Pointer, Future Work -- Extents, Future Work -- Removal of Misc-User Event Type, Future Work
@section Future Work -- Mouse Pointer
@cindex future work, mouse pointer
@cindex mouse pointer, future work

@menu
* Future Work -- Abstracted Mouse Pointer Interface::
* Future Work -- Busy Pointer::
@end menu

@node Future Work -- Abstracted Mouse Pointer Interface, Future Work -- Busy Pointer, Future Work -- Mouse Pointer, Future Work -- Mouse Pointer
@subsection Future Work -- Abstracted Mouse Pointer Interface
@cindex future work, abstracted mouse pointer interface
@cindex abstracted mouse pointer interface, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@strong{Abstract:} We need to create a new image format that allows
standard pointer shapes to be specified in a way that works on all
window systems.  I suggest that this be called @code{pointer}, which
has one tag associated with it, named @code{:data}, and whose value is
a string.  The possible strings that can be specified here are
predefined by XEmacs, and are guaranteed to work across all window
systems.  This means that we may need to provide our own definition for
pointer shapes that are not standard on some systems.  In particular,
there are a lot more standard pointer shapes under X than under
Windows, and most of these pointer shapes are fairly useful.  There are
also a few pointer shapes (the hand, for example, I think) that exist
on Windows but not on X.  Converting the X pointer shapes to Windows
should be easy because the definitions of the pointer shapes are simply
XBM files, which we can read under Windows.  Going the other way might
be a little bit more difficult, but it should still not be that hard.

While we're at it, we should change the image format currently called
@code{cursor-font} to @code{x-cursor-font}, because it only works under
X Windows.  We also need to change the format called @code{resource} to
be @code{mswindows-resource}.  At least in the case of
@code{cursor-font}, the old value should be maintained for
compatibility as an obsolete alias.  The @code{resource} format was
added so recently that it's possible that we can just change it.

@node Future Work -- Busy Pointer, , Future Work -- Abstracted Mouse Pointer Interface, Future Work -- Mouse Pointer
@subsection Future Work -- Busy Pointer
@cindex future work, busy pointer
@cindex busy pointer, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

Automatically make the mouse pointer switch to a busy shape (e.g. a
watch) when XEmacs has been ``busy'' for more than, e.g., 2 seconds.
Define the @dfn{busy time} as the time since the last time that XEmacs
was ready to receive input from the user.  An implementation might be:

@enumerate
@item
Set up an asynchronous timeout to signal after the busy time; these are
triggered through a call to @code{QUIT}, so they will be triggered even
when the code is busy doing something.

@item
We already have an @code{emacs_is_blocking} flag when we are waiting
for input.  In the same place, when we are about to block and wait for
input (regardless of whether input is already present), maybe call a
hook, which in this case would remove the timer and put back the normal
mouse shape.  Then when we exit the blocking stage (we got some input),
call another hook, which in this case will start the timer.  (A rough
Lisp-level sketch of these hooks appears below.)
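In the following sketch, the hook functions and
@code{set-frame-pointer} are hypothetical names (a real implementation
would live in C, next to @code{emacs_is_blocking});
@code{add-timeout} and @code{disable-timeout} are the existing XEmacs
timeout functions.

@example
;; Hypothetical sketch; these hooks do not exist yet.
(defvar busy-pointer-delay 2
  "Seconds of busy time before the pointer switches to a watch.")

(defvar busy-pointer-timeout-id nil)

;; Called from the hook that runs when we stop blocking for input:
(defun busy-pointer-start-timer ()
  (setq busy-pointer-timeout-id
        (add-timeout busy-pointer-delay
                     #'(lambda (ignore)
                         (set-frame-pointer 'watch))  ; hypothetical
                     nil)))

;; Called from the hook that runs when we block for input again:
(defun busy-pointer-stop-timer ()
  (when busy-pointer-timeout-id
    (disable-timeout busy-pointer-timeout-id)
    (setq busy-pointer-timeout-id nil))
  (set-frame-pointer 'normal))                        ; hypothetical
@end example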
Note that we don't want these ``blocking'' hooks to be triggered just
because of an @code{accept-process-output} or some similar thing that
retrieves events, only to put them back onto a queue for later
processing.  Maybe we want some sort of flag that's bound by those
routines saying that we aren't really waiting for input.  Making that
flag Lisp-accessible allows it to be set by similar sorts of Lisp
routines (if there are any?) that loop retrieving events but defer
them, or only drain the queue, or whatnot.

#### Think about whether it would make some sense to try and be more
clever in our determinations of what counts as ``real waiting for user
input'', e.g. whether the event gets dispatched (unfortunately this
occurs way too late; we want to know to remove the busy cursor
@strong{before} getting an event), or maybe whether there are any
events waiting to be processed or we'll truly block, etc.  (E.g. one
possibility: if there is input on the queue already when we ``block''
for input, don't remove the busy-wait pointer, but trigger the removal
of it when we dispatch a user event.)
@end enumerate

@node Future Work -- Extents, Future Work -- Version Number and Development Tree Organization, Future Work -- Mouse Pointer, Future Work
@section Future Work -- Extents
@cindex future work, extents
@cindex extents, future work

@menu
* Future Work -- Everything should obey duplicable extents::
@end menu

@node Future Work -- Everything should obey duplicable extents, , Future Work -- Extents, Future Work -- Extents
@subsection Future Work -- Everything should obey duplicable extents
@cindex future work, everything should obey duplicable extents
@cindex everything should obey duplicable extents, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

A lot of functions don't properly track duplicable extents.  The
@code{concat} function does, for example, but the @code{format}
function does not, and extents in keymap prompts are not displayed
either.  All of the functions that generate strings or string-like
entities should track the extents that are associated with the strings.
Currently this is difficult because there is no general mechanism
implemented for doing this.  I propose such a general mechanism, which
would not be hard to implement, and would be easy to use in other
functions that build up strings.

The basic idea is that we create a C structure that is analogous to a
Lisp string in that it contains string data and lists of extents for
that data.  Unlike standard Lisp strings, however, this structure
(let's call it @code{lisp_string_struct}) can be incrementally updated
and its allocation is handled explicitly so that no garbage is
generated.  (This is important, for example, in the event-handling
code, which would want to use this structure but needs to not generate
any garbage for efficiency reasons.)  Both the string data and the list
of extents in this string are handled using dynarrs so that it is easy
to incrementally update this structure.  Functions should exist to
create and destroy instances of @code{lisp_string_struct}, to generate
a Lisp string from a @code{lisp_string_struct} and vice versa, to
append a sub-string of a Lisp string to a @code{lisp_string_struct}, to
just append characters to a @code{lisp_string_struct}, etc.  The only
thing possibly tricky about implementing these functions is
implementing the copying of extents from a Lisp string into a
@code{lisp_string_struct}.
However, there is already a function @code{copy_string_extents()} that
does basically this exact thing, and it should be easy to create a
modified version of this function.

@node Future Work -- Version Number and Development Tree Organization, Future Work -- Improvements to the @code{xemacs.org} Website, Future Work -- Extents, Future Work
@section Future Work -- Version Number and Development Tree Organization
@cindex future work, version number and development tree organization
@cindex version number and development tree organization, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@strong{Abstract:} The purpose of this proposal is to present a
coherent plan for how development branches in XEmacs are managed.  This
will cover such issues as stable versus experimental branches, creating
new branches, synchronizing patches between branches, and how version
numbers are assigned to branches.

A development branch is defined to be a linear series of releases of
the XEmacs code base, each of which is derived from the previous one.
When the XEmacs development tree is forked and two branches are created
where there used to be one, the branch that is intended to be more
stable and have fewer changes made to it is considered the one that
inherits the parent branch, and the other branch is considered to have
begun at the branching point.  The less stable of the two branches will
eventually be forked again, while this will not usually happen to the
more stable of the two branches, and its development will eventually
come to an end.  This means that every branch has a definite ending
point.  For example, the 20.x branch began at the point when the
released 19.13 code tree was split into a 19.x and a 20.x branch, and
the 20.x branch will end when the last 20.x release (probably numbered
20.5 or 20.6) is released.

I think that there should always be three active development branches.
These branches can be designated the stable, the semi-stable, and the
experimental branches.  This situation has existed in the current code
tree ever since the 21.0 development branch was split.  In this
situation, the stable branch is the 20.x series.  The semi-stable
branch is the 21.0 release and the stability releases that follow.  The
experimental branch is the branch that was created as the result of the
21.0 development branch split.

Typically, the stable branch has been released for a long period of
time.  The semi-stable branch has been released for a short period of
time, or is about to be released, and the experimental branch has not
yet been released, and will probably not be released for a while.

The conditions that should hold in all circumstances are:

@enumerate
@item
There should be three active branches.

@item
The experimental branch should never be in feature freeze.
@end enumerate

The reason for the second condition is to ensure that active
development can always proceed and is never throttled, as is happening
currently at the end of the 21.0 release cycle.  What this means is
that as soon as the experimental branch is deemed to be stable enough
to go into feature freeze:

@enumerate
@item
The current stable branch is made inactive and all further development
on it ceases.

@item
The semi-stable branch, which by now should have been released for a
fair amount of time, and should be fairly stable, gets renamed to the
stable branch.

@item
The experimental branch is forked into two branches, one of which
becomes the semi-stable branch, and the other, the experimental branch.
@end enumerate

The stable branch is always in high resistance, which is to say that
the only changes that can be made to the code are important bug fixes
involving a small amount of code, where it should be clear just by
reading the code that no destabilizing code has been introduced.  The
semi-stable branch is in low resistance, which means that no major
features can be added, but, except right before a release, fairly major
code changes are allowed.  Features can be added if they are
sufficiently small, if they are deemed sufficiently critical due to
severe problems that would exist if the features were not added (for
example, replacement of the unexec mechanism with a portable solution
would be a feature that could be added to the semi-stable branch
provided that it did not involve an overly radical code
re-architecture, because otherwise it might be impossible to build
XEmacs on some architectures or with some compilers), or if the primary
purpose of the new feature is to remedy an incompleteness in a recent
architectural change that was not finished in a prior release due to
lack of time (for example, abstracting the mouse pointer and
list-of-colors interfaces, which were left out of 21.0).  There is no
feature resistance in place in the experimental branch, which allows
full development to proceed at all times.

In general, both the stable and semi-stable branches will contain
previous net releases.  In addition, there will be beta releases in all
three branches, and possibly development snapshots between the beta
releases.  It's obviously necessary to have a good version numbering
scheme in order to keep everything straight.

First of all, it needs to be immediately clear from the version number
whether the release is a beta release or a net release.  Steve has
proposed getting rid of the beta version numbering system, which I
think would be a big mistake.  Furthermore, the net release version
number and beta release version number should be kept separate, just as
they are now, to make it completely clear where any particular release
stands.  There may be alternate ways of phrasing a beta release other
than something like 21.0 beta 34, but in all such systems, the beta
number needs to be zero for any release version.  Three possible
alternative systems, none of which I like very much, are:

@enumerate
@item
The beta number is simply an extra number in the regular version
number.  Then, for example, 21.0 beta 34 becomes 21.0.34.  The problem
is that the release version, which would simply be called 21.0, appears
to be earlier than 21.0 beta 34.

@item
The beta releases appear as later revisions of earlier releases.  Then,
for example, 21.1 beta 34 becomes 21.0.34, and 21.0 beta 34 would have
to become 21.-1.34.  This has both the obvious ugliness of negative
version numbers and the problem that it makes beta releases appear to
be associated with their previous releases, when in fact they are more
closely associated with the following release.

@item
Simply make the beta version number be negative.  In this scheme, you'd
start with something like -1000 as the first beta, and then 21.0 beta
34 would get renumbered to 21.0.-968.  Obviously, this is a crazy and
convoluted scheme as well, and we would be best to avoid it.
@end enumerate

Currently, the between-beta snapshots are not numbered, but I think
that they probably should be.  If appropriate scripts are set up to
automate beta releases, it should be very easy to have a version number
automatically updated whenever a snapshot is made.
The number could be added either as a separate snapshot number, so that
you'd have 21.0 beta 34 pre 1, which comes before 21.0 beta 34; or we
could make the beta number be floating point, and then the same
snapshot would have to be called 21.0 beta 33.1.  The latter solution
seems quite kludgey to me.

There also needs to be a clear way to distinguish, when a net release
is made, which branch the release is a part of.  Again, several
solutions come to mind:

@enumerate
@item
The major version number reflects which development branch the release
is in and the minor version number indicates how many releases have
been made along this branch.  In this scheme, 21.0 is always the first
release of the 21 series development branch, and when this branch is
split, the child branch that becomes the experimental branch gets
version numbers starting with 22.  This scheme is the simplest, and
it's the one I like best.

@item
We move to a three-part version number.  In this scheme, the first two
numbers indicate the branch, and the third number indicates the release
along the branch.  In this scheme, we have numbers like 21.0.1, which
would be the second release in the 21.0 series branch, and 21.1.2,
which would be the third release in the 21.1 series branch.  The major
version number then gets increased only very occasionally, and only
when a sufficiently major architectural change has been made,
particularly one that causes compatibility problems with code written
for previous branches.  I think schemes like this are unnecessary in
most circumstances, because usually either the major version number
ends up changing so often that the second number is always either zero
or one, or the major version number never changes, and as such becomes
useless.  By the time the major version number would change, the
product itself has changed so much that it often gets renamed.
Furthermore, it is clear that the two-number scheme has been used
throughout most of the history of Emacs, and recently we have been
following the two-number scheme also.  If we introduced a third
revision number at this point, it would both confuse existing code that
assumes there are two numbers, and would look rather silly given that
the major version number is so high and would probably remain at the
same place for quite a long time.

@item
A third scheme that would attempt to cross the two schemes would keep
the same concept of major version number as for the three-number
scheme, and would compress the second and third numbers of the
three-number scheme into one number by using increments of ten.  For
example, the current 21.x branch would have releases numbered 21.0,
21.1, etc.  The next branch would be numbered 21.10, 21.11, etc.  I
don't like this scheme very much because it seems rather kludgey, and
also because it is not used in any other product as far as I know.

@item
Another scheme that would combine the second and third numbers in the
three-number scheme would be to have the releases in the current 21.x
series be numbered 21.0, then 21.01, then 21.02, etc.  The next series
is 21.1, then 21.11, then 21.12, etc.  This is similar to the way that
version numbers are done for DOS and Windows.  I also think that this
scheme is fairly silly because, like the previous scheme, its only
purpose is to avoid increasing the major version number very much.  But
given that we already have a fairly large major version number, there
doesn't seem to be any particular problem with increasing this number
by one every year or two.
Some people will object that by doing this, it becomes impossible to
tell when a change is so major that it causes a lot of code breakage,
but past releases have not been accurate indicators of this.  For
example, 19.12 caused a lot of code breakage, but 20.0 caused less, and
21.0 caused less still.  In the GNU Emacs world, there were byte code
changes made between 19.28 and 19.29, but as far as I know, not between
19.29 and 20.0.
@end enumerate

With three active development branches, synchronizing code changes
between the branches is obviously somewhat of a problem.  To make
things easier, I propose a few general guidelines:

@enumerate
@item
Merging between different branches need not happen that often.  It
should not happen more often than necessary, to avoid undue burden on
the maintainer, but it needs to be done at all defined checkpoints.
These checkpoints need to be noted in all of the places that track
changes along the branch, for example, in all of the change logs and in
all of the CVS tags.

@item
Every code change that can be considered a self-contained unit, no
matter how large or small, needs to have a change log entry (preferably
a single change log entry) associated with it.  This is an absolute
requirement.  There should be no code changes without an associated
change log entry.  Otherwise, it is highly likely that patches will not
be correctly synchronized across all versions, and will get lost.
There is no need for change log entries to contain unnecessary detail,
though, and it is important that there be no more change log entries
than necessary, which means that two or more change log entries
associated with a single patch need to be grouped together if possible.
This might imply that there should be one global change log instead of
change logs in each directory, or at the very least, the number of
separate change logs should be kept to a minimum.

@item
The patch that is associated with each change log entry needs to be
kept around somewhere.  The reason for this is that when synchronizing
code from some branch to some earlier branch, it is necessary to go
through each change log entry and decide whether a change is worthy of
making it into a more stable branch.  If so, the patch associated with
this change needs to be individually applied to the earlier branch.

@item
All changes made in more stable branches get merged into less stable
branches unless the change really is completely unnecessary in the less
stable branch because it is superseded by some other change.  This will
probably mean more developers making changes to the semi-stable branch
than to the experimental branch.  This means that developers should
strive to do their development in the most stable branch that they
expect their code to go into.  An alternative to this, which is perhaps
more workable, is simply to insist that all developers make all patches
based off of the experimental branch, and then later merge these
patches down to the more stable branches as necessary.  This means,
however, that submitted patches should never be combinations of two or
more unrelated changes.  Whenever such patches are submitted, they
should either be rejected (which should happen to anybody who should
know better, which probably means everybody on the beta list and
anybody else who is a regular contributor), or the maintainer or some
other designated party needs to filter the combined patch into separate
patches, one per logical change.
@item
The maintainer should keep all the patches around in some database, and
the patches should be given an identifier consisting of the author of
the patch, the date the patch was submitted, and some other identifying
characteristic, such as a number, in case there is more than one patch
on the same date by the same author.  The database should be correctly
marked at all times with something indicating which branches each patch
has been applied to, and it should be publicly visible, so that patch
authors can determine whether their patches have been received and
whether they have been applied, and so that patches do not get
needlessly resubmitted.

@item
Global automatable changes such as textual renaming, reordering, and
additions or deletions of parameters in function calls should still be
allowed, even with multiple development branches.  (Sometimes these are
necessary for code cleanliness, and in the long run, they save a lot of
time, even though they may cause some headaches in the short term.)  In
general, when such changes are made, they should occur in a separate
beta version that contains only such changes and no other patches, and
the changes should be made in both the semi-stable and experimental
branches at the same time.  The description of the beta version should
make it very clear that the beta consists of such changes.  The reason
for doing these things is to make it easier for people to diff between
beta versions in order to figure out the changes that were made without
the diff getting cluttered up by these code cleanliness changes that
don't change any actual behavior.
@end enumerate

@node Future Work -- Improvements to the @code{xemacs.org} Website, Future Work -- Keybindings, Future Work -- Version Number and Development Tree Organization, Future Work
@section Future Work -- Improvements to the @code{xemacs.org} Website
@cindex future work, improvements to the @code{xemacs.org} website
@cindex improvements to the @code{xemacs.org} website, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

The @code{xemacs.org} web site is the face that XEmacs presents to the
outside world.  In my opinion, its most important function is to
present information about XEmacs in such a way as to attract new XEmacs
users and co-contributors.  Existing members of the XEmacs community
can probably find out most of the information they want to know about
XEmacs regardless of what shape the web site is in, or for that matter,
perhaps even if the web site doesn't exist at all.  However, potential
new users and co-contributors who go to the XEmacs web site and find it
out of date and/or lacking the information that they need are likely to
be turned away and may never return.  For this reason, I think it's
extremely important that the web site be up-to-date, well-organized,
and full of information that an inquisitive visitor is likely to want
to know.

The current XEmacs web site needs a lot of work if it is to meet these
standards.  I don't think it's reasonable to expect one person to do
all of this work and make continual updates as needed, especially given
the dismal record that the XEmacs web site has had.  The proper thing
to do is to place the web site itself under CVS and allow many of the
core members to remotely check files in and out.  This way, for
example, Steve could update the part of the site that contains the
current release status of XEmacs.
(Much of this could be done by a script that Steve executes when he
sends out a beta release announcement, which automatically HTML-izes
the mail message and puts it in the appropriate place on the web site.
There are programs that are specifically designed to convert email
messages into HTML, for example @code{mhonarc}.)  Meanwhile, the
@code{xemacs.org} mailing list administrator (currently Jason Mastaler,
I think) could maintain the part of the site that describes the various
mailing lists and other addresses at @code{xemacs.org}.  Someone like
me (perhaps through a proxy typist) could maintain the part of the site
that specifies the future directions that XEmacs is going in, etc.,
etc.

Here are some things that I think it's very important to add to the web
site.

@enumerate
@item
A page describing in detail how to get involved in the XEmacs
development process, how and where to submit various patches to the
XEmacs core or associated packages, how to contact the maintainers and
core developers of XEmacs and the maintainers of various packages, etc.

@item
A page describing exactly how to download, compile, and install XEmacs,
and how to download and install the various binary distributions.  This
page should particularly cover in detail how exactly the package system
works from an installation standpoint and how to correctly compile and
install under Microsoft Windows and Cygwin.  This latter section should
cover which compilers are needed under Microsoft Windows and Cygwin,
and how to get and install the Cygwin components that are needed.

@item
A page describing where to get the various ancillary libraries that can
be linked with XEmacs, such as the JPEG, TIFF, PNG, X-Face, DBM, and
other libraries.  This page should also cover how to correctly compile
and install these libraries, including under Microsoft Windows (or at
least it should contain pointers to where this information can be
found).  Also, it should describe anything that needs to be specified
as an option to @code{configure} in order for XEmacs to link with and
make use of these libraries or of Motif or CDE.  Finally, this page
should list which versions of the various libraries are required for
use with the various different beta versions of XEmacs.  (Remember,
this can change from beta to beta, and someone needs to keep a watchful
eye on it.)

@item
Pointers to any other sites containing information on XEmacs.  This
would include, for example, Hrvoje's XEmacs on Windows FAQ and my
Architecting XEmacs web site.  (Presumably, most of the information in
this section will be temporary.  Eventually, these pages should be
integrated into the main XEmacs web site.)

@item
A page listing the various sub-projects in the XEmacs development
process and who is responsible for each of these sub-projects, for
example development of the package system, administration of the
mailing lists, maintenance of stable XEmacs versions, maintenance of
the CVS web interface, etc.  This page should also list all of the
packages that are archived at @code{xemacs.org} and who the maintainer
or maintainers are for each of these packages.
@end enumerate

@subheading Other Places with an XEmacs Presence

We should try to keep an XEmacs presence in all of the major places on
the web that are devoted to free software or to the ``open source''
community.
This includes, for example, the open source web site at
@uref{http://opensource.oreilly.com} (I'm already in the process of
contacting this site), the Freshmeat site at
@uref{http://www.freshmeat.net}, the various announcement newsgroups
(for example, @uref{news:comp.os.linux.announce,comp.os.linux.announce}
and the Windows announcement newsgroup), etc.

@node Future Work -- Keybindings, Future Work -- Byte Code Snippets, Future Work -- Improvements to the @code{xemacs.org} Website, Future Work
@section Future Work -- Keybindings
@cindex future work, keybindings
@cindex keybindings, future work

@menu
* Future Work -- Keybinding Schemes::
* Future Work -- Better Support for Windows Style Key Bindings::
* Future Work -- Misc Key Binding Ideas::
@end menu

@node Future Work -- Keybinding Schemes, Future Work -- Better Support for Windows Style Key Bindings, Future Work -- Keybindings, Future Work -- Keybindings
@subsection Future Work -- Keybinding Schemes
@cindex future work, keybinding schemes
@cindex keybinding schemes, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@strong{Abstract:} We need a standard mechanism that allows different
global key binding schemes to be defined.  Ideally, this would be the
@uref{keyboard-actions.html,keyboard action interface} that I have
proposed; however, that would require a lot of work on the part of mode
maintainers and other external Elisp packages, and will not be ready in
the short term.  So I propose a very kludgy interface, along the lines
of what is done in Viper currently.  Perhaps we can rip that key
munging code out of Viper and make a separate extension that implements
a global key binding scheme munging feature.  This way a key binding
scheme could rearrange all the default keys, and all sorts of other
code that depends on the standard keys being in their default locations
would still work.

@node Future Work -- Better Support for Windows Style Key Bindings, Future Work -- Misc Key Binding Ideas, Future Work -- Keybinding Schemes, Future Work -- Keybindings
@subsection Future Work -- Better Support for Windows Style Key Bindings
@cindex future work, better support for windows style key bindings
@cindex better support for windows style key bindings, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@strong{Abstract:} This page describes how we could create an XEmacs
extension that modifies the global key bindings so that a Windows user
would feel at home when using the keyboard in XEmacs.  Some of these
bindings don't conflict with standard XEmacs keybindings and should be
added by default, or at the very least under Windows, and probably
under X Windows as well.  Other key bindings would need to be
implemented in a Windows compatibility extension which can be enabled
and disabled on the fly, following the conventions outlined in
@uref{enabling-extensions.html,Standard interface for enabling
extensions}.  Ideally, this should be implemented using the
@uref{keyboard-actions.html,keyboard action interface}, but that will
not be available in the short term, so we will have to resort to some
awful kludges, following the model of Michael Kifer's Viper mode.

We really need to make XEmacs provide standard Windows key bindings as
much as possible.  Currently, for example, there are at least two
packages that allow the user to make a selection using the shifted
arrow keys, and neither package works all that well or is maintained.
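The core of shifted-arrow selection is small.  Here is a minimal
sketch; the function name @code{windows-shift-right} is invented for
illustration, and a real package would of course cover all the motion
keys:

@example
;; Minimal illustrative sketch; a real package would bind every
;; motion key, not just shift-right.
(defun windows-shift-right ()
  "Move forward one character, extending the selection Windows-style."
  ;; The `_' interactive code tells XEmacs to preserve the zmacs
  ;; region, so successive presses keep extending the selection.
  (interactive "_")
  (unless (region-active-p)
    (push-mark (point) t t))    ; activate the region at point
  (forward-char 1))

(define-key global-map [(shift right)] 'windows-shift-right)
@end example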
There should be one well-written piece of code that does this (along
the lines of the sketch above), and it should be a standard part of
XEmacs.  In fact, it should be turned on by default under Windows, and
probably under X as well.  (As an aside here, one point of contention
in how to implement this involves what happens if you select a region
using the shifted arrow keys and then hit the regular arrow keys.  Does
the region remain selected or not?  I think there should be a variable
that controls which of these two behaviors you want.  We can argue over
what the default value of this variable should be.  The standard
Windows behavior here is to keep the region selected, but move the
insertion point elsewhere, which is unfortunately impossible to
implement in XEmacs.)

Some thought should be given to what to do about the standard Windows
control and alt key bindings.  Under NTEmacs, there is a variable that
controls whether the alt key behaves like the Emacs meta key, or
whether it is passed on to the menu as in standard Windows programs.
We should surely implement this and put this option on the
@strong{Options} menu.  Making @kbd{Alt-f}, for example, invoke the
@strong{File} menu is not all that disruptive in XEmacs, because the
user can always type @kbd{ESC f} to get the meta key functionality.

Making @kbd{Control-x}, for example, do @strong{Cut} is much, much more
problematic, of course, but we should consider how to implement this
anyway.  One possibility would be to move all of the current Emacs
control key bindings onto control-shift plus a key, and to make the
simple control keys follow the Windows standard as much as possible.
This would mean, for example, that we would have the following
keybindings:@*
@kbd{Control-x} ==> @strong{Cut} @*
@kbd{Control-c} ==> @strong{Copy} @*
@kbd{Control-v} ==> @strong{Paste} @*
@kbd{Control-z} ==> @strong{Undo}@*
@kbd{Control-f} ==> @strong{Find} @*
@kbd{Control-a} ==> @strong{Select All}@*
@kbd{Control-s} ==> @strong{Save}@*
@kbd{Control-p} ==> @strong{Print}@*
@kbd{Control-y} ==> @strong{Redo}@*
(this functionality @emph{is} available in XEmacs with Kyle Jones'
@code{redo.el} package, but it should be better integrated)@*
@kbd{Control-n} ==> @strong{New} @*
@kbd{Control-o} ==> @strong{Open}@*
@kbd{Control-w} ==> @strong{Close Window}@*

The changes described in the previous paragraph should be put into an
extension named @code{windows-keys.el} (see
@uref{enabling-extensions.html,Standard interface for enabling
extensions}) so that it can be enabled and disabled on the fly using a
menu item and can be selected as the default for a particular user in
their custom options file.  Once this is implemented, the Windows
installer should also be modified so that it brings up a dialog box
that allows the user to select which key binding scheme they would
prefer as the default: the XEmacs standard bindings, Vi bindings (which
would be Viper mode), Windows-style bindings, Brief, CodeWright, Visual
C++, or whatever we manage to implement.

@node Future Work -- Misc Key Binding Ideas, , Future Work -- Better Support for Windows Style Key Bindings, Future Work -- Keybindings
@subsection Future Work -- Misc Key Binding Ideas
@cindex future work, misc key binding ideas
@cindex misc key binding ideas, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@itemize
@item
M-123 ... does digit arg.

@item
M-( groups commands together until a matching M-).

@item
Nested M-() are allowed.

@item
A repeat count plus () repeats each group of commands N times, as a
unit.
@item
M-() by itself forms an anonymous macro, and there should be a command
to execute it again, like vi's execute-macro command; when there is no
preceding (), it repeats the last command of a similar amount of
complexity, acting something like vi's dot command.

@item
C-numbers switches to a particular window.  Maybe 1-3 or 1-4 does this.

@item
C-4 or 5 to 9 (or ()? maybe reserved) switches to a particular frame.

@item
Possibly C-Sh-numbers select more windows or frames.

@item
M-C-1 M-C-2 M-C-3 M-C-4 M-C-5 M-C-6 M-C-7 M-C-8 M-C-9 M-C-0 maybe
should execute anonymous macros (another possibility is insert
register, but you can easily simulate that with a keyboard macro).

@item
What about C-S M-C-S M-S?

@item
I think there should be default function key bindings for
@strong{ILLEGIBLE} similar to what I have: load, save, cut, copy,
paste, kill line, start/end macro, do macro.
@end itemize

@node Future Work -- Byte Code Snippets, Future Work -- Lisp Stream API, Future Work -- Keybindings, Future Work
@section Future Work -- Byte Code Snippets
@cindex future work, byte code snippets
@cindex byte code snippets, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@itemize
@item
For use in time-critical places (e.g. redisplay) such as display
tables, a simple piece of code is evalled, e.g.

@example
(int-to-char (1+ c))
@end example

where @code{c} is the arg, specbound.

@item
This can be compiled like

@example
(byte-compile-snippet (int-to-char (1+ c))
                      (c))   ; <- environment of local vars
@end example

@item
We need eval with bindings (not hard to implement; extendable when
lexical scoping is present).

@item
What's the return value of @code{byte-compile-snippet}?  (Look to see
how this might be implemented.)
@end itemize

@menu
* Future Work -- Autodetection::
* Future Work -- Conversion Error Detection::
* Future Work -- Unicode::
* Future Work -- BIDI Support::
* Future Work -- Localized Text/Messages::
@end menu

@node Future Work -- Autodetection, Future Work -- Conversion Error Detection, Future Work -- Byte Code Snippets, Future Work -- Byte Code Snippets
@subsection Future Work -- Autodetection
@cindex future work, autodetection
@cindex autodetection, future work

There are various proposals contained here.

@subheading New Implementation of Autodetection Mechanism

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

The current autodetection mechanism in XEmacs Mule has many problems.
For one thing, it is wrong too much of the time.  Another problem,
although easily fixed, is that priority lists are fixed rather than
varying depending on the particular locale; and finally, it doesn't
warn the user when it's not sure of the encoding or when there's a
mistake made during decoding.  In both of these situations the user
should be presented with a list of likely encodings and given the
choice, rather than simply proceeding anyway and giving a result that
is likely to be wrong and may result in data corruption when the file
is saved out again.

All coding systems are categorized according to their type.  Currently
this includes ISO 2022, Big 5, Shift-JIS, UTF-8 and a few others.  In
the future there will be many more types defined, and this mechanism
will be generalized so that it is easily extendable by the Lisp
programmer.

In general, each coding system type defines a series of subtypes which
are handled differently for the purpose of detection.  For example, ISO
2022 defines many different subtypes such as 7-bit, 8-bit, locking
shift, designating, and so on.
UCS2 may define subtypes such as normal and byte-reversed.  The
detection engine works conceptually by calling the detection methods of
all of the defined coding system types in parallel on successive chunks
of data (which may, for example, be 4K in size, but where the size
makes no difference except for optimization purposes) and watching the
results until either a definite answer is determined or the end of data
is reached.  The way the definite answer is determined will be defined
below.  The detection method of the coding system type is passed some
data and a chunk of memory, which the method uses to store its current
state (and which is maintained separately for each coding system type
by the detection engine between successive calls to the coding system
type's detection method).  Its return value should be an alist
consisting of a list of all of the defined subtypes for that coding
system type along with a level of likelihood and a list of additional
properties indicating certain features detected in the data.  The extra
properties returned are defined entirely by the particular coding
system type and are used only in the algorithm described below under
``user control''.  However, the levels of likelihood have a standard
meaning as follows:

@table @asis
@item Level 4 (``near certainty'')
Typically indicates that a signature has been detected, usually at the
beginning of the data, indicating that the data is encoded in this
particular coding system type.  An example of this would be the byte
order mark at the beginning of UCS2-encoded data or the GZIP mark at
the beginning of GZIP data.

@item Level 3 (``highly likely'')
Tell-tale signs have been discovered in the data that are
characteristic of this particular coding system type.  Examples of this
might be ISO 2022 escape sequences or Unicode end-of-line markers at
regular intervals.

@item Level 2 (``strongly statistically likely'')
Statistical analysis concludes that there's a high chance that this
data is encoded according to this particular type.  For example, this
might mean that for UCS2 data, there is a high proportion of null bytes
or other repeated bytes in the odd-numbered bytes of the data and a
high variance in the even-numbered bytes of the data.  For Shift-JIS,
this might indicate that there were no illegal Shift-JIS sequences and
a fairly high occurrence of common Shift-JIS characters.

@item Level 1 (``weak statistical likelihood'')
There is some indication that the data is encoded in this coding system
type; in fact, there is a reasonable chance that it may be some other
type as well.  This means, for example, that no illegal sequences were
encountered and at least some data was encountered that is purposely
not in other coding system types.  For Shift-JIS data, this might mean
that some bytes in the range 128 to 159 were encountered in the data.

@item Level 0 (``neutral'')
Either there's not enough data to make any decision, or the data could
well be interpreted as this type (meaning no illegal sequences) but
there is little or no indication of anything particular to this
particular type.

@item Level -1 (``weakly unlikely'')
Some data was encountered that could conceivably be part of the coding
system type but is probably not.  For example, excessively long line
lengths or very rarely-encountered sequences.

@item Level -2 (``strongly unlikely'')
Typically, a number of illegal sequences were encountered.
@end table
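To make the return convention concrete, here is what a UCS2 detection
method might return after seeing a byte-order mark at the start of the
data (a sketch only; the exact list structure and property names are
illustrative):

@example
;; Hypothetical return value of the UCS2 detection method:
((normal        . (4  (byte-order-mark)))  ; level 4: signature seen
 (byte-reversed . (-2 ())))                ; level -2: illegal sequences
@end example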
The algorithm to determine when to stop and indicate that the data has been detected as a particular coding system uses a priority list, which is typically specified as part of the language environment determined from the current locale or the user's choice. This priority list consists of a list of coding system subtypes, along with a minimum level required for positive detection and optionally additional properties that need to be present. Using the return values from all of the detection methods called, the detection engine looks through this priority list until it finds a positive match. In this priority list, along with each subtype is a particular coding system to return when the subtype is encountered. (For example, in a Japanese-language environment particular subtypes of ISO 2022 will be associated with the Japanese coding system version of those subtypes). It is perfectly legal and quite common in fact, to list the same subtype more than once in the priority list with successively lower requirements. Other facts that can be listed in the priority list for a subtype are "reject", meaning that the data should never be detected as this subtype, or "ask", meaning that if the data is detected to be this subtype, the user will be asked whether they actually mean this. This latter property could be used, for example, towards the bottom of the priority list. In addition there is a global variable which specifies the minimum number of characters required before any positive match is reported. There may actually be more than one such variable for different sources of data, for example, detection of files versus detection of subprocess data. Whenever a file is opened and detected to be a particular coding system, the subtype, the coding system and the associated level of likelihood will be prominently displayed either in the echo area or in a status box somewhere. If no positive match is found according to the priority list, or if the matches that are found have the "ask" property on them, then the user will be presented with a list of choices of possible encodings and asked to choose one. This list is typically sorted first by level of likelihood, and then within this, by the order in which the subtypes appear in the priority list. This list is displayed in a special kind of dialog box or other buffer allowing the user, in addition to just choosing a particular encoding, to view what the file would look like if it were decoded according to the type. Furthermore, whenever a file is decoded according to a particular type, the decoding engine keeps track of status values that are output by the coding system type's decoding method. Generally, this status will be in the form of errors or warnings of various levels, some of which may be severe enough to stop the decoding entirely, and some of which may either indicate definitely malformed data but from which it's possible to recover, or simply data that appears rather questionable. If any of these status values are reported during decoding, the user will be informed of this and asked "are you sure?" As part of the "are you sure" dialog box or question, the user can display the results of the decoding to make sure it's correct. If the user says "no, they're not sure," then the same list of choices as previously mentioned will be presented. @subheading RFC: Autodetection Also appeared under heading "Implementation of Coding System Priority Lists in Various Locales" ? 
Author: @uref{mailto:stephen@@xemacs.org,Stephen Turnbull} Date: 11/1/1999 2:48 AM @example >>>>> "Hrvoje" == Hrvoje Niksic <hniksic@@srce.hr> writes: [Ben sez:] >> You are perfectly free to set up your XEmacs like this, but >> XEmacs/Mule @strong{will} autodetect by default if there is no >> Content-Type: info and no reason to believe we are dealing with >> binary files. Hrvoje> In that case, it will be a serious mistake to make Hrvoje> --with-mule the default, ever. I think more care should Hrvoje> be shown in meeting the need of European users. @end example Hrvoje, I don't understand what you are worrying about. I suspect you are worrying about Handa's hyperactive and obstinate Mule, not what Ben has in mind. Yes, Ben has said "better guessing," but that's simply not reasonable without substantial language environment information. I think trying to detect Latin-1 vs Latin-2 in the POSIX locale would be a big mistake, I think trying to guess Big 5 v. Shift JIS in a European locale would be a big mistake. If Ben doesn't mean "more appropriate use of language environment information" when he writes "better guessing," I, as much as you, want to see how he plans to do that. Ben? ("Yes/no/oops I need to think about it" is good enough if you have specifics you intend to put in the RFC you're planning to present.) Let me give a formal proposal of what I would like to see in the autodetection specification. @enumerate @item Definitions @enumerate @item @dfn{Autodetection} means detecting and making available to Mule the external file's encoding. See (5), below. It doesn't imply any specific actions based on that information. @item The @dfn{default} case is POSIX locale, and no environment information in ~/.emacs. N.B. This @strong{will} cause breakage for all 1-byte users because the default case can no longer assume Latin-1. You @strong{may} be able to use the TTY font or the Xt -font option to fake this, and default to iso8859-1; I would hope that we would not use such a kludge in the beta versions, although it might be satisfactory for general use. In particular, encodings like VISCII (Vietnamese) and I believe KOI-8 (Cyrillic) are not ISO-2022-clean, but using C1 control characters as a heuristic for detecting binary files is useful. If we do allow it, I think that XEmacs should bitch and warn that the practices of implicitly specifying language environment by -font and defaulting on TTYs is deprecated and likely to be obsoleted. @item The @dfn{European} case is any Latin-* locale, either implied by setlocale() and friends or set in ~/.emacs. Latin-1 is specifically not given precedence over other Latin-*, or non-Latin or non-ISO-8859 for that matter. I suspect but am not sure that this case extends to all ISO-8859 encodings, and possibly to non-ISO-8859 single-byte encodings like KOI-8r (in particular when combined in a class with ISO-8859 encodings). @item The @dfn{CJK} case is any CJK locale. Japanese is specifically not given precedence over other Asian locales. @item For completeness, define the @dfn{Unicode} case (Unicode unfortunately has lots of junk such as precomposed characters, language tags, and directionality indicators in it; we probably don't care yet, but we should also not claim compliance) and the @dfn{general} case (which has a lot of features similar to Unicode, but lacks the advantage of a unified encoding). This proposal has no idea how to handle the special features of these, or even if that matters. 
The general case includes stuff that nobody here really knows how it works, like Tibetan and Ethiopic. @end enumerate Each of the following cases is given in the order of priority of detection. I'm not sure I'm serious about the top priority given the (optional) Unicode detection. This may be appropriate if Ben is right that ISO-2022 is going to disappear, but possibly not until then (two two-byte sequences out of 65536 is probably 1.99 too many). It probably isn't too risky if (6)(c) is taken pretty seriously; a Unicode file should contain _no_ private use characters unless the encoding is explicitly specified, and that's a block of 1/10 of the code space, which should help a lot in detecting binary files. @item Default locale @enumerate @item Some Unicode (fixed width; maybe UTF-8, too?) may optionally be detected by the byte-order-mark magic (if the first two bytes are 0xFE 0xFF, the file is Unicode text, if 0xFF 0xFE, it is wrong-endian Unicode; if legal in UTF-8, it would be 0xFE 0xBB 0xBF, either-endian). This is probably an optimization that should not be on by default yet. @item ISO-2022 encodings will be detected as long as they use explicit designation of all non-ASCII character sets. This means that many 7-bit ISO-2022 encodings would be detected (eg, ISO-2022-JP), but EUC-JP and X Compound Text would not, because they implicitly designate character sets. N.B. Latin-1 will be detected as binary, as for any Latin-*. N.B. An explicit ISO-2022 designation is semantically equivalent to a Content-Type: header. It is more dangerous because shorter, but I think we should recognize them by default despite the slight risk; XEmacs is a text editor. N.B. This is unlikely to be as dangerous as it looks at first glance. Any file that includes an 8-bit-set byte before the first valid designation should be detected as binary. @item Binary files will be detected (eg, presence of NULs, other non-whitespace control characters, absurdly long lines, and presence of bytes >127). @item Everything else is ASCII. @item Newlines will be detected in text files. @end enumerate @item European locales @enumerate @item Unicode may optionally be detected by the byte-order-mark magic. @item ISO-2022 encodings will be detected as long as they use explicit designation of all non-ASCII character sets. @item A locale-specific class of 1-byte character sets (eg, '(Latin-1)) will be detected. N.B. The reason for permitting a class is for cases like Cyrillic where there are both ISO-8859 encodings and incompatible encodings (KOI-8r) in common use. If you want to write a Latin-1 v. Latin-2 detector, be my guest, but I don't think it would be easy or accurate. @item Binary files will be detected per (2)(c), except that only 8-bit bytes out of the encoding's range imply binary. @item Everything else is ASCII. @item Newlines will be detected in text files. @end enumerate @item CJK locales @enumerate @item Unicode may optionally be detected by the byte-order-mark magic. @item ISO-2022 encodings will be detected as long as they use explicit designation of all non-ASCII character sets. @item A locale-specific class of multi-byte and wide-character encodings will be detected. N.B. No 1-byte character sets (eg, Latin-1) will be detected. The reason for a class is to allow the Japanese to let Mule do the work of choosing EUC v. SJIS. @item Binary files will be detected per (3)(d). @item Everything else is ASCII. @item Newlines will be detected in text files. 
@end enumerate @item Unicode and general locales; multilingual use @enumerate @item Hopefully a system general enough to handle (2)--(4) will handle these, too, but we should watch out for gotchas like Unicode "plane 14" tags which (I think _both_ Ben and Olivier will agree) have no place in the internal representation, and thus must be treated as out-of-band control sequences. I don't know if all such gotchas will be as easy to dispose of. @item An explicit coding system priority list will be provided to allow multilingual users to autodetect both Shift JIS and Big 5, say, but this ability is not promised by Mule, since it would involve (eg) heuristics like picking a set of code points that are frequent in Shift JIS and uncommon in Big 5 and betting that a file containing many characters from that set is Shift JIS. @end enumerate @item Relationship to decoding semantics @enumerate @item Autodetection should be run on every input stream unless the user explicitly disables it. @item The (conceptual) default procedure is @item Read the file into the buffer Announce the result of autodetection to the user. User may request decoding, with autodetected encoding(s) given priority in a list of available encodings. zations (see (e) below) should avoid introducing data tion that this default procedure would avoid. sly, it can't be perfect if any autodecoding is done; like Hrvoje should have an easily available option to to this default (or an optimized approximation which t actually read the whole file into a buffer) or simply y everything as binary (with the "font" for binary files a user option). @item This implies that we should be detecting conditions in the tail of the file which violate the implicit assumptions of the coding system autodetected (eg, in UTF-8 illegal UTF-8 sequences, including those corresponding to surrogates) should raise a warning; the buffer should probably be made read-only and the user prompted. This could be taken to extremes, like checking by table whether all characters in a Japanese file are actually legitimate JIS codes; that's insane (and would cause corporate encodings to be recognized as binary). But we should think about the idea that autodetection shouldn't mean XEmacs can't change its mind. @item A flexible means for the user to delegate the decision (conditional on the result of autodetection) to decode or not to XEmacs or a Lisp program should be provided (eg, the coding priority list and/or a file-coding-alist). @item Optimized operations (eg, the current lstreams) should be provided, with the recognition that if they depend on sampling the file they are risky. @item Mule should provide a reasonable set of default delegations (as in (d) above) for as many locales as possible. @end enumerate @item Implementation @enumerate @item I think all the decision logic suggested above can be accomplished through a coding-priority-list and appropriate initializations for different language environments, and a file-coding-alist. @item Many of the tests on the file's tail shouldn't be very expensive; in particular, all of the ones I've suggested are O(n) although they might involve moderate-sized auxiliary tables for efficiency (eg, 64kB for a single Unicode-oriented test). 
@end enumerate @end enumerate Other comments: It might be reasonable given Hrvoje's objections to require that any autodetection that could cause data loss (any coding system that involves escape sequences, and only those AFAIK: by design translation to Unicode is invertible) by default prompt the user (presumable with a novice-like ability to retain the prompt, always default to binary, or always default to the autodetected encoding) in the future, at least in locales that don't need it (POSIX, Latin-any). Ben thinks that we can remember the input data; I think it's going to be hard to comprehensively test that a highly optimized version works. Good design will help, but ISO-2022 is enormously complex, and there are many encodings that violate even its lax assumptions. On the other hand, memory is the only way to get non-rewindable streams right. Hrvoje himself said he would like to have an XEmacs that distinguishes between Latin-1 and Latin-2 text. Where it is possible to do that, this is exactly what autodetection of ISO-2022 and Unicode gives you. Many people would want that, even at some risk of binary corruption. >> Once again I remind you that XEmacs is a @strong{text} editor. There >> are lots of files that potentially may have Japanese etc. in >> them without this marked, e.g. C or Elisp files in the XEmacs >> source. Surely you're not arguing that we interpret even these >> files as binary by default? Hrvoje> I am. If I want to see Japanese, I'll setup my Hrvoje> environment that way. But I don't, and neither do 99% of Hrvoje> Croatian users. I can't speak for French, Italian, and Hrvoje> others, but I'd assume similar. Hrvoje> If there is Japanese in the source files, I will see it as Hrvoje> escape sequences, which is perfectly fine, because I don't Hrvoje> read Japanese. And some (European) people will have their terminals scrambled, because Shift-JIS contains sequences that can change the state of XTerm (as do fixed-width Unicode and Big5). This may also be a problem with some Windows-12xx encodings; I'm not sure they all are ISO-2022-clean. (This isn't a problem for XEmacs native X11 frames or native MS-Windows frames, and the XEmacs sources themselves are all in 7-bit ISO-2022 now IIRC. But it is a potential source of great frustration for many users.) I think that should be considered too, although it is presumably lower priority than the data corruption of binary files. @subheading Response to RFC: Autodetection Author: @uref{mailto:ben@@xemacs.org,Ben Wing} Date: 11/1/1999 7:24 AM Stephen, thank you very much for writing this up. I think it is a good start, and definitely moving in the direction I would like to see things going: more proposals, less arguing. (aka "more light, less heat") However, I have some suggestions for cleaning this up: You should try to make it more layered. For example, you might have one section devoted to the workings of autodetection, which starts out like this (the section numbers below are totally arbitrary): @subsubheading Section 5 @code{Autodetect()} is a function whose arguments are (1) a readable stream, (2) some hints indicating how the autodetection is to proceed, and (3) a value indicating the maximum number of characters to examine at the beginning of the stream. (Possibly, the value in (3) may be some special symbol indicating that we only go as far as the next line, or a certain number of lines ahead; this would be used as part of "continuous autodetection", e.g. 
we are decoding the results of an interactive terminal session, where the user may periodically switch encodings, line terminations, etc. as different programs get run and/or telnet or similar sessions are entered into and exited.) We assume the stream is rewindable; if not, insert a "rewinding" stream in front of the non-rewinding stream; this kind of stream automatically buffers the data as necessary. [You can use pseudo-code terminology here. No need for straight C or ELisp.] [Then proceed to describe what the hints look like -- e.g. you could portray it as a property list or whatever. The idea is that, for each locale, there is a corresponding hints value that is used at least by default. The hints structure also has to be set up to allow for two or more competing hints specifications to be merged together. For example, the extension of a file might provide an additional hint or hints about how to interpret the data of that file, and the caller of @code{autodetect()}, when calling @code{autodetect()} on such a file, would need to have a way of gracefully merging the default hints corresponding to the locale with the more specific hints provided by the extension. Furthermore, users like Hrvoje might well want to provide their own hints to supplement and override parts of the generic hints -- e.g. "I don't ever want to see non-European encodings decoded; treat them as binary instead".] [Then describe algorithmically how the autodetection works. First, you could describe it more generally, i.e. presenting an algorithmic overview, then you could discuss in detail exactly how autodetection of a particular type of external encoding works -- e.g. "for iso2022, we first look for an escape character, followed by a byte in this range [. ... .] etc."] @subsubheading Section 6 This section describes the concept of a locale in XEmacs, and how it is derived from the user's environment. A locale in XEmacs is a pair, a country and a language, together determining the handling of locale-specific areas of XEmacs. All locale-specific areas in XEmacs make use of this XEmacs locale, and do not attempt to derive the locale from any other sources. The user is free to change the current locale at any time; accessor and mutator functions are provided to do this so that various locale-specific areas can optionally be changed together with it. [Then you describe how the XEmacs locale is extracted from .emacs, from @code{setlocale()}, from the LANG environment variables, from -font, or wherever else. All other sections assume this dirty work is done and never even mention it] @subsubheading Section 7 [Here you describe the default @code{autodetect()} hints value corresponding to each possible locale. You should probably use a schematic description here, e.g. an actual Lisp property list, liberally commented.] @subsubheading Section 8 etc. [Other sections cover anything I've missed. By being very careful to separate out the layers, you simultaneously introduce more rigor (easier to catch bugs) and make it easier for someone else to understand it completely.] 
@subheading Better Algorithm, More Flexibility, Different Levels of Certainty @subheading Much More Flexible Coding System Priority List, per-Language Environment @subheading User Ability to Select Encoding when System Unsure or Encounters Errors @subheading Another Autodetection Proposal Author: @uref{mailto:ben@@xemacs.org,Ben Wing} however, in general the detection code has major problems and needs lots of work: @itemize @bullet @item instead of merely "yes" or "no" for particular categories, we need a more flexible system, with various levels of likelihood. Currently I've created a system with six levels, as follows: [see @file{file-coding.h}] Let's consider what this might mean for an ASCII text detector. (In order to have accurate detection, especially given the iteration I proposed below, we need active detectors for @strong{all} types of data we might reasonably encounter, such as ASCII text files, binary files, and possibly other sorts of ASCII files, and not assume that simply "falling back to no detection" will work at all well.) An ASCII text detector DOES NOT report ASCII text as level 0, since that's what the detector is looking for. Such a detector ideally wants all bytes in the range 0x20 - 0x7E (no high bytes!), except for whitespace control chars and perhaps a few others; LF, CR, or CRLF sequences at regular intervals (where "regular" might mean an average < 100 chars and 99% < 300 for code and other stuff of the "text file w/line breaks" variety, but for the "text file w/o line breaks" variety, excluding blank lines, averages could easily be 600 or more with 2000-3000 char "lines" not so uncommon); similar statistical variance between odds and evens (not Unicode); frequent occurrences of the space character; letters more common than non-letters; etc. Also checking for too little variability between frequencies of characters and for exclusion of particular characters based on character ranges can catch ASCII encodings like base-64, UUEncode, UTF-7, etc. Granted, this doesn't even apply to everything called "ASCII", and we could potentially distinguish off ASCII for code, ASCII for text, etc. as separate categories. However, it does give us a lot to work off of, in deciding what likelihood to choose -- and it shows there's in fact a lot of detectable patterns to look for even in something seemingly so generic as ASCII. The detector would report most text files in level 1 or level 2. EUC encodings, Shift-JIS, etc. probably go to level -1 because they also pass the EOL test and all other tests for the ASCII part of the text, but have lots of high bytes, which in essence turn them into binary. Aberrant text files like something in BASE64 encoding might get placed in level 0, because they pass most tests but fail dramatically the frequency test; but they should not be reported as any lower, because that would cause explicit prompting, and the user should be able any valid text file without prompting. The escape sequences and the base-64-type checks might send 7-bit iso2022 to 0, but probably not -1, for similar reasons. @item The assumed algorithm for the above detection levels is to in essence sort categories first by detection level and then by priority. Perhaps, however, we would want smarter algorithms, or at least something user-controllable -- in particular, when (other than no category at level 0 or greater) do we prompt the user to pick a category? 
@item Improvements in how the detection algorithm works: we want to handle lots of different ways something could be encoded, including multiple stacked encodings. trying to specify a series of detection levels (check for base64 first, then check for gzip, then check for an i18n decoding, then for crlf) won't generally work. for example, what about the same encoding appearing more than once? for example, take euc-jp, base64'd, then gzip'd, then base64'd again: this could well happen, and you could specify the encodings specifically as base64|gzip|base64|euc-jp, but we'd like to autodetect it without worrying about exactly what order these things appear in. we should allow for iterating over detection/decoding cycles until we reach some maximum (we got stuck in a loop, due to incorrect category tables or detection algorithms), have no reported detection levels over -1, or we end up with no change after a decoding pass (i.e. the coding system associated with a chosen category was @code{no-conversion} or something equivalent). it might make sense to divide things into two phases (internal and external), where the internal phase has a separate category list and would probably mostly end up handling EOL detection; but the i think about it, the more i disagree. with properly written detectors, and properly organized tables (in general, those decodings that are more "distinctive" and thus detectable with greater certainty go lower on the list), we shouldn't need two phases. for example, let's say the example above was also in CRLF format. The EOL detector (which really detects *plain text* with a particular EOL type) would return at most level 0 for all results until the text file is reached, whereas the base64, gzip or euc-jp decoders will return higher. Once the text file is reached, the EOL detector will return 0 or higher for the CRLF encoding, and all other detectors will return 0 or lower; thus, we will successfully proceed through CRLF decoding, or at worst prompt the user. (The only external-vs-internal distinction that might make sense here is to favor coding systems of the correct source type over those that require conversion between external and internal; if done right, this could allow the CRLF detector to return level 1 for all CRLF-encoded text files, even those that look like Base-64 or similar encoding, so that CRLF encoding will always get decoded without prompting, but not interfere with other decoders. On the other hand, this external-vs-internal distinction may not matter at all -- with automatic internal-external conversion, CRLF decoding can occur before or after decoding of euc-jp, base64, iso2022, or similar, without any difference in the final results.) #### What are we trying to say? In base64, the CRLF decoding before base64 decoding is irrelevant, they will be thrown out as whitespace is not significant in base64. [sjt considers all of this to be rather bogus. Ideas like "greater certainty" and "distinctive" can and should be quantified. The issue of proper table organization should be a question of optimization.] [sjt wonders if it might not be a good idea to use Unicode's newline character as the internal representation so that (for non-Unicode coding systems) we can catch EOL bugs on Unix too.] @item There need to be two priority lists and two category->coding-system lists. Once is general, the other category->langenv-specific. The user sets the former, the langenv category->the latter. The langenv-specific entries take precedence category->over the others. 
This works similarly to the category->category->Unicode charset priority list. @item The simple list of coding categories per detectors is not enough. Instead of coding categories, we need parameters. For example, Unicode might have separate detectors for UTF-8, UTF-7, UTF-16, and perhaps UCS-4; or UTF-16/UCS-4 would be one detection type. UTF-16 would have parameters such as "little-endian" and "needs BOM", and possibly another one like "collapse/expand/leave alone composite sequences" once we add this support. Usually these parameters correspond directly to a coding system parameter. Different likelihood values can be specified for each parameter as well as for the detection type as a whole. The user can specify particular coding systems for a particular combination of detection type and parameters, or can give "default parameters" associated with a detection type. In the latter case, we create a new coding system as necessary that corresponds to the detected type and parameters. @item a better means of presentation. rather than just coming up with the new file decoded according to the detected coding system, allow the user to browse through the file and conveniently reject it if it looks wrong; then detection starts again, but with that possibility removed. in cases where certainty is low and thus more than one possibility is presented, the user can browse each one and select one or reject them all. @item fail-safe: even after the user has made a choice, if they later on realize they have the wrong coding system, they can go back, and we've squirreled away the original data so they can start the process over. this may be tricky. @item using a larger buffer for detection. we use just a small piece, which can give quite random results. we may need to buffer up all the data we look through because we can't necessarily rewind. the idea is we proceed until we get a result that's at least at a certain level of certainty (e.g. "probable") or we reached a maximum limit of how much we want to buffer. @item dealing with interactive systems. we might need to go ahead and present the data before we've finished detection, and then re-decode it, perhaps multiple times, as we get better detection results. @item Clearly some of these are more important than others. at the very least, the "better means of presentation" should be implemented as soon as possible, along with a very simple means of fail-safe whenever the data is readibly available, e.g. it's coming from a file, which is the most common scenario. @end itemize ben [at least that's what sjt thinks] ***** Author: @uref{mailto:stephen@@xemacs.org,Stephen Turnbull} While this is clearly something of an improvement over earlier designs, it doesn't deal with the most important issue: to do better than categories (which in the medium term is mostly going to mean "which flavor of Unicode is this?"), we need to look at statistical behavior rather than ruling out categories via presence of specific sequences. This means the stream processor should @enumerate @item keep octet distributions (octet, 2-, 3-, 4- octet sequences) @item in some kind of compressed form @item look for "skip features" (eg, characteristic behavior of leading bytes for UTF-7, UTF-8, UTF-16, Mule code) @item pick up certain "simple" regexps @item provide "triggers" to determine when statistical detectors should be invoked, such as octet count @item and "magic" like Unicode signatures or file(1) magic. 
@end enumerate --sjt @node Future Work -- Conversion Error Detection, Future Work -- Unicode, Future Work -- Autodetection, Future Work -- Byte Code Snippets @subsection Future Work -- Conversion Error Detection @cindex future work, conversion error detection @cindex conversion error detection, future work @subheading "No Corruption" Scheme for Preserving External Encoding when Non-Invertible Transformation Applied Author: @uref{mailto:ben@@xemacs.org,Ben Wing} A preliminary and simple implementation is: @quotation But you could implement it much more simply and usefully by just determining, for any text being decoded into mule-internal, can we go back and read the source again? If not, remember the entire file (GNUS message, etc) in text properties. Then, implement the UI interface (like Netscape's) on top of that. This way, you have something that at least works, but it might be inefficient. All we would need to do is work on making the underlying implementation more efficient. @end quotation A more detailed proposal for avoiding binary file corruption is @quotation Basic idea: A coding system is a filter converting an entire input stream into an output stream. The resulting stream can be said to be "correspondent to" the input stream. Similarly, smaller units can correspond. These could potentially include zero width intervals on either side, but we avoid this. Specifically, the coding system works like: @example loop (input) @{ Read bytes till we have enough to generate a translated character or a chars. This establishes a "correspondence" between the whole input and output more or less in minimal chunks. @} @end example We then do the following processing: @enumerate @item Eliminate correspondences where one or the other of the I/O streams has a zero interval by combining with an adjacent interval; @item Group together all adjacent "identity" correspondences into as large groups as possible; @item Use text properties to store the non-identity correspondences on the characters. For identity correspondences, use a simple text property on all that contains no data but just indicates that the whole string of text is identity corresponded. (How do we define "identity"? Latin 1 or could it be something else? For example, Latin 2)? @item Figure out the procedures when text is inserted/deleted and copied or pasted. @item Figure out to save the file out making use of the correspondences. Allow ways of saving without correspondences, and doing a "save to buffer with and without correspondences." Need to be clever when dealing with modal coding systems to parse the correspondences to get the internal state right. @end enumerate @end quotation @subheading Another Error-Catching Idea Author: @uref{mailto:ben@@xemacs.org,Ben Wing} Nov 4, 1999 Finally, I don't think "save the input" is as hard as you make it out to be. Conceptually, in fact, it's simple: for each minimal group of bytes where you cannot absolutely guarantee that an external->internal transformation is reversible, you put a text property on the corresponding internal character indicating the bytes that generated this character. We also put a text property on every character, indicating the coding system that caused the transformation. This latter text property is extremely efficient (e.g. in a buffer with no data pasted from elsewhere, it will map to a single extent over all the buffer), and the former cases should not be prevalent enough to cause a lot of inefficiency, esp. 
if we define what "reversible" means for each coding system in such a way that it correctly handles the most common cases. The hardest part, in fact, is making all the string/text handling in XEmacs be robust w.r.t. text properties. @subheading Strategies for Error Annotation and Coding Orthogonalization Author: @uref{mailto:stephen@@xemacs.org,Stephen Turnbull} We really want to separate out a number of things. Conceptually, there is a nested syntax. At the top level is the ISO 2022 extension syntax, including charset designation and invocation, and certain auxiliary controls such as the ISO 6429 direction specification. These are octet-oriented, with the single exception (AFAIK) of the "exit Unicode" sequence which uses the UTF's natural width (1 byte for UTF-7 and UTF-8, 2 bytes for UCS-2 and UTF-16, and 4 bytes for UCS-4 and UTF-32). This will be treated as a (deprecated) special case in Unicode processing. The middle layer is ISO 2022 character interpretation. This will depend on the current state of the ISO 2022 registers, and assembles octets into the character's internal representation. The lowest level is translating system control conventions. At present this is restricted to newline translation, but one could imagine doing tab conversion or line wrapping here. "Escape from Unicode" processing would be done at this level. At each level the parser will verify the syntax. In the case of a syntax error or warning (such as a redundant escape sequence that affects no characters), the parser will take some action, typically inserting the erroneous octets directly into the output and creating an annotation which can be used by higher level I/O to mark the affected region. This should make it possible to do something sensible about separating newline convention processing from character construction, and about preventing ISO 2022 escape sequences from being recognized inappropriately. The basic strategy will be to have octet classification tables, and switch processing according to the table entry. It's possible that, by doing the processing with tables of functions or the like, the parser can be used for both detection and translation. @subheading Handling Writing a File Safely, Without Data Loss Author: @uref{mailto:ben@@xemacs.org,Ben Wing} @quotation When writing a file, we need error detection; otherwise somebody will create a Unicode file without realizing the coding system of the buffer is Raw, and then lose all the non-ASCII/Latin-1 text when it's written out. We need two levels @enumerate @item first, a "safe-charset" level that checks before any actual encoding to see if all characters in the document can safely be represented using the given coding system. FSF has a "safe-charset" property of coding systems, but it's stupid because this information can be automatically derived from the coding system, at least the vast majority of the time. What we need is some sort of alternative-coding-system-precedence-list, langenv-specific, where everything on it can be checked for safe charsets and then the user given a list of possibilities. When the user does "save with specified encoding", they should see the same precedence list. Again like with other precedence lists, there's also a global one, and presumably all coding systems not on other list get appended to the end (and perhaps not checked at all when doing safe-checking?). safe-checking should work something like this: compile a list of all charsets used in the buffer, along with a count of chars used. 
that way, "slightly unsafe" coding systems can perhaps be presented at the end, which will lose only a few characters and are perhaps what the users were looking for. [sjt sez this whole step is a crock. If a universal coding system is unacceptable, the user had better know what he/she is doing, and explicitly specify a lossy encoding. In principle, we can simply check for characters being writable as we go along. Eg, via an "unrepresentable character handler." We still have the buffer contents. If we can't successfully save, then ask the user what to do. (Do we ever simply destroy previous file version before completing a write?)] @item when actually writing out, we need error checking in case an individual char in a charset can't be written even though the charsets are safe. again, the user gets the choice of other reasonable coding systems. [sjt -- something is very confused, here; safe charsets should be defined as those charsets all of whose characters can be encoded.] @item same thing (error checking, list of alternatives, etc.) needs to happen when reading! all of this will be a lot of work! @end enumerate @end quotation Author: @uref{mailto:stephen@@xemacs.org,Stephen Turnbull} I don't much like Ben's scheme. First, this isn't an issue of I/O, it's a coding issue. It can happen in many places, not just on stream I/O. Error checking should take place on all translations. Second, the two-pass algorithm should be avoided if possible. In some cases (eg, output to a tty) we won't be able to go back and change the previously output data. Third, the whole idea of having a buffer full of arbitrary characters which we're going to somehow shoehorn into a file based on some twit user's less than informed idea of a coding system is kind of laughable from the start. If we're going to say that a buffer has a coding system, shouldn't we enforce restrictions on what you can put into it? Fourth, what's the point of having safe charsets if some of the characters in them are unsafe? Fifth, what makes you think we're going to have a list of charsets? It seems to me that there might be reasons to have user-defined charsets (eg, "German" vs "French" subsets of ISO 8859/15). Sixth, the idea of having language environment determine precedence doesn't seem very useful to me. Users who are working with a language that corresponds to the language environment are not going to run into safe charsets problems. It's users who are outside of their usual language environment who run into trouble. Also, the reason for specifying anything other than a universal coding system is normally restrictions imposed by other users or applications. Seventh, the statistical feedback isn't terribly useful. Users rarely "want" a coding system, they want their file saved in a useful way. We could add a FORCE argument to conversions for those who really want a specific coding system. But mostly, a user might want to edit out a few unsafe characters. So (up to some maximum) we should keep a list of unsafe text positions, and provide a convenient function for traversing them. --sjt @node Future Work -- Unicode, Future Work -- BIDI Support, Future Work -- Conversion Error Detection, Future Work -- Byte Code Snippets @subsection Future Work -- Unicode @cindex future work, unicode @cindex unicode, future work Author: @uref{mailto:ben@@xemacs.org,Ben Wing} Following is an old proposal. Unicode has been implemented already, in a different fashion; but there are some ideas here for more general support, e.g. 
properties of Unicode characters other than their mappings to particular charsets. We recognize 128, [256], 128x128, [256x256] for source charsets; for Unicode, 256x256 or 16x256x256. In all cases, use tables of tables and substitute a default subtable if entire row is empty. If destination is Unicode, either 16 or 32 bits. If destination is charset, either 8 or 16 bits. For the moment, since we only do 94, 96, 94x94 or 96x96, only do 128 or 128x128 for source charsets and use the range 33-126 or 32-127. (Except ASCII - we special case that and have no table because we can algorithmically translate) Also have a 16x256x256 table -> 32 bits of Unicode char properties. A particular charset contains two associated mapping tables, for both directions. API is set-unicode-mapping: @example (set-unicode-mapping unicode char unicode charset-code charset-offset unicode vector of char unicode list of char unicode string of char unicode vector or list of codes charset-offset @end example Establishes a mapping between a unicode codepoint (an integer) and one or more chars in a charset. The mapping is automatically established in both directions. Chars in a charset can be specified either with an actual character or a codepoint (i.e. an integer) and the charset it's within. If a sequence of chars or charset points is given, multiple mappings are established for consecutive unicode codepoints starting with the given one. Charset codepoints are specified as most-significant x 256 + least significant, with both bytes in the range 33-126 (for 94 or 94x94) or 32-127 (for 96 or 96x96), unless an offset is given, which will be subtracted from each byte. (Most common values are 128, for codepoints given with the high bit set, or -32, for codepoints given as 1-94 or 0-95.) Other API's: @example (write-unicode-mapping file charset) @end example Write the mapping table for a particular charset to the specified file. The tables are written in an internal format that allows for efficient loading, for portability across platforms and XEmacs invocations, for conserving space, for appending multiple tables one directly after another with no need for a directory anywhere in the file, and for reorganizing a file as in this format (with a magic sequence at the beginning). The data will be appended at the end of a file, so that multiple tables can be written to a file; remove the file first to avoid this. @example (write-unicode-properties file unicode-codepoint length) @end example Write the Unicode properties (not including charset mappings) for the specified range of contiguous Unicode codepoints to the end of the file (i.e. append mode) in a binary format similar to what was mentioned in the write-unicode-mapping description and with the same features. Extension to set-unicode-mapping: @example (set-unicode-mapping list-or-vector-of-unicode-codepoints char "" charset-code charset-offset "" sequence of char "" list-or-vector-of-codes charset-offset @end example The first two forms are conceptually the inverse of the forms above to specify characters for a contiguous range of Unicode codepoints. These new forms let you specify the Unicode codepoints for a contiguous range of chars in a charset. "Contiguous" here means that if we run off the end of a row, we go to the first entry of the next row, rather than to an invalid code point. For example, in a 94x94 charset, valid rows and columns are in the range 0x21-0x7e; after 0x457c 0x457d 4x457e goes 0x4621, not something like 0x457f, which is invalid. 
The final two forms are the most general, letting you specify an arbitrary set of both Unicode points and charset chars, and the two are matched up just like a series of individual calls. However, if the lists or vectors do not have the same length, an error is signaled. @example (load-unicode-mapping file &optional charset) @end example If charset is omitted, loads all charset mapping tables found and returns a list of the charsets found. If charset is specified, searches through the file for the appropriate mapping tables. (This is extremely fast because each entry in the file gives an offset to the next one). Returns t if found. @example (load-unicode-properties file unicode-codepoint) @end example @example (list-unicode-entries file) @end example @example (autoload-unicode-mapping charset) @end example ... (unfinished) @node Future Work -- BIDI Support, Future Work -- Localized Text/Messages, Future Work -- Unicode, Future Work -- Byte Code Snippets @subsection Future Work -- BIDI Support @cindex future work, bidi support @cindex bidi support, future work Author: @uref{mailto:ben@@xemacs.org,Ben Wing} @enumerate @item Use text properties to handle nesting levels, overrides BIDI-specific text properties (as per Unicode BIDI algorithm) computed at text insertion time. @item Lisp API for reordering a display line at redisplay time, possibly substitution of different glyphs (esp. mirroring of glyphs). @item Lisp API called after a display line is laid out, but only when reordering may be necessary (display engine checks for non-uniform BIDI text properties; can handle internally a line that's completely in one direction) @item Default direction is a buffer-local variable @item We concentrate on implementing Unicode BIDI algorithm. @item Display support for mirroring of entire window @item Display code keeps track of mirroring junctures so it can display double cursor. @item Entire layout of screen (on a per window basis) is exported as a Lisp API, for visual editing (also very useful for other purposes e.g. proper handling of word wrapping with proportional fonts, complex Lisp layout engines e.g. W3) @item Logical, visual, etc. cursor movement handled entirely in Lisp, using aforementioned API, plus a specifier for controlling how cursor is shown (e.g. split or not). @end enumerate @node Future Work -- Localized Text/Messages, , Future Work -- BIDI Support, Future Work -- Byte Code Snippets @subsection Future Work -- Localized Text/Messages @cindex future work, localized text/messages @cindex localized text/messages, future work NOTE: There is existing message translation in X Windows of menu names. This is handled through X resources. The files are in @file{PACKAGES/mule-packages/locale/app-defaults/LOCALE/Emacs}, where @var{locale} is @samp{ja}, @samp{fr}, etc. See lib-src/make-msgfile.lex. Long comment from jwz, some additions from ben marked "ben": (much of this comment is outdated, and a lot of it is actually implemented) @subsection Proposal for How This All Ought to Work Author: @uref{mailto:jwz@@jwz.org,Jamie Zawinski} this isn't implemented yet, but this is the plan-in-progress In general, it's accepted that the best way to internationalize is for all messages to be referred to by a symbolic name (or number) and come out of a table or tables, which are easy to change. However, with Emacs, we've got the task of internationalizing a huge body of existing code, which already contains messages internally. 
For the C code we've got two options: @itemize @bullet @item Use a Sun-like @code{gettext()} form, which takes an "english" string which appears literally in the source, and uses that as a hash key to find a translated string; @item Rip all of the strings out and put them in a table. @end itemize In this case, it's desirable to make as few changes as possible to the C code, to make it easier to merge the code with the FSF version of emacs which won't ever have these changes made to it. So we should go with the former option. The way it has been done (between 19.8 and 19.9) was to use @code{gettext()}, but @strong{also} to make massive changes to the source code. The goal now is to use @code{gettext()} at run-time and yet not require a textual change to every line in the C code which contains a string constant. A possible way to do this is described below. (@code{gettext()} can be implemented in terms of @code{catgets()} for non-Sun systems, so that in itself isn't a problem.) For the Lisp code, we've got basically the same options: put everything in a table, or translate things implicitly. Another kink that lisp code introduces is that there are thousands of third- party packages, so changing the source for all of those is simply not an option. Is it a goal that if some third party package displays a message which is one we know how to translate, then we translate it? I think this is a worthy goal. It remains to be seen how well it will work in practice. So, we should endeavor to minimize the impact on the lisp code. Certain primitive lisp routines (the stuff in lisp/prim/, and especially in @file{cmdloop.el} and @file{minibuf.el}) may need to be changed to know about translation, but that's an ideologically clean thing to do because those are considered a part of the emacs substrate. However, if we find ourselves wanting to make changes to, say, RMAIL, then something has gone wrong. (Except to do things like remove assumptions about the order of words within a sentence, or how pluralization works.) There are two parts to the task of displaying translated strings to the user: the first is to extract the strings which need to be translated from the sources; and the second is to make some call which will translate those strings before they are presented to the user. The old way was to use the same form to do both, that is, @code{GETTEXT()} was both the tag that we searched for to build a catalog, and was the form which did the translation. The new plan is to separate these two things more: the tags that we search for to build the catalog will be stuff that was in there already, and the translation will get done in some more centralized, lower level place. This program (@file{make-msgfile.c}) addresses the first part, extracting the strings. For the emacs C code, we need to recognize the following patterns: @example message ("string" ... ) error ("string") report_file_error ("string" ... ) signal_simple_error ("string" ... ) signal_simple_error_2 ("string" ... ) build_translated_string ("string") #### add this and use it instead of @code{build_string()} in some places. yes_or_no_p ("string" ... ) #### add this instead of funcalling Qyes_or_no_p directly. barf_or_query_if_file_exists #### restructure this check all callers of Fsignal #### restructure these signal_error (Qerror ... ) #### change all of these to @code{error()} And we also parse out the @code{interactive} prompts from @code{DEFUN()} forms. 
#### When we've got a string which is a candidate for translation, we should ignore it if it contains only format directives, that is, if there are no alphabetic characters in it that are not a part of a `%' directive. (Careful not to translate either "%s%s" or "%s: ".) @end example For the emacs Lisp code, we need to recognize the following patterns: @example (message "string" ... ) (error "string" ... ) (format "string" ... ) (read-from-minibuffer "string" ... ) (read-shell-command "string" ... ) (y-or-n-p "string" ... ) (yes-or-no-p "string" ... ) (read-file-name "string" ... ) (temp-minibuffer-message "string") (query-replace-read-args "string" ... ) @end example I expect there will be a lot like the above; basically, any function which is a commonly used wrapper around an eventual call to @code{message} or @code{read-from-minibuffer} needs to be recognized by this program. @example (dgettext "domain-name" "string") #### do we still need this? things that should probably be restructured: @code{princ} in @file{cmdloop.el} @code{insert} in @file{debug.el} face-interactive @file{help.el}, @file{syntax.el} all messed up @end example Author: @uref{mailto:ben@@xemacs.org,Ben Wing} ben: (format) is a tricky case. If I use format to create a string that I then send to a file, I probably don't want the string translated. On the other hand, If the string gets used as an argument to (y-or-n-p) or some such function, I do want it translated, and it needs to be translated before the %s and such are replaced. The proper solution here is for (format) and other functions that call gettext but don't immediately output the string to the user to add the translated (and formatted) string as a string property of the object, and have functions that output potentially translated strings look for a "translated string" property. Of course, this will fail if someone does something like @example (y-or-n-p (concat (if you-p "Do you " "Does he ") (format "want to delete %s? " filename)))) @end example But you shouldn't be doing things like this anyway. ben: Also, to avoid excessive translating, strings should be marked as translated once they get translated, and further calls to gettext don't do any more translating. Otherwise, a call like @example (y-or-n-p (format "Delete %s? " filename)) @end example would cause translation on both the pre-formatted and post-formatted strings, which could lead to weird results in some cases (y-or-n-p has to translate its argument because someone could pass a string to it directly). Note that the "translating too much" solution outlined below could be implemented by just marking all strings that don't come from a .el or .elc file as already translated. Menu descriptors: one way to extract the strings in menu labels would be to teach this program about "^(defvar .*menu\n" forms; that's probably kind of hard, though, so perhaps a better approach would be to make this program recognize lines of the form @example "string" ... ;###translate @end example where the magic token ";###translate" on a line means that the string constant on this line should go into the message catalog. This is analogous to the magic ";###autoload" comments, and to the magic comments used in the EPSF structuring conventions. ----- So this program manages to build up a catalog of strings to be translated. To address the second part of the problem, of actually looking up the translations, there are hooks in a small number of low level places in emacs. 
Assume the existence of a C function gettext(str) which returns the translation of @var{str} if there is one, otherwise returns @var{str}. @itemize @bullet @item @code{message()} takes a char* as its argument, and always filters it through @code{gettext()} before displaying it. @item errors are printed by running the lisp function @code{display-error} which doesn't call @code{message} directly (it princ's to streams), so it must be carefully coded to translate its arguments. This is only a few lines of code. @item @code{Fread_minibuffer_internal()} is the lowest level interface to all minibuf interactions, so it is responsible for translating the value that will go into Vminibuf_prompt. @item Fpopup_menu filters the menu titles through @code{gettext()}. The above take care of 99% of all messages the user ever sees. @item The lisp function temp-minibuffer-message translates its arg. @item query-replace-read-args is funny; it does (setq from (read-from-minibuffer (format "%s: " string) ... )) (setq to (read-from-minibuffer (format "%s %s with: " string from) ... )) @end itemize What should we do about this? We could hack query-replace-read-args to translate its args, but might this be a more general problem? I don't think we ought to translate all calls to format. We could just change the calling sequence, since this is odd in that the first %s wants to be translated but the second doesn't. Solving the "translating too much" problem: The concern has been raised that in this situation: @itemize @bullet @item "Help" is a string for which we know a translation; @item someone visits a file called Help, and someone does something contrived like (error buffer-file-name) @end itemize then we would display the translation of Help, which would not be correct. We can solve this by adding a bit to Lisp_String objects which identifies them as having been read as literal constants from a .el or .elc file (as opposed to having been constructed at run time as it would in the above case.) To solve this: @itemize @bullet @item @code{Fmessage()} takes a lisp string as its first argument. If that string is a constant, that is, was read from a source file as a literal, then it calls @code{message()} with it, which translates. Otherwise, it calls @code{message_no_translate()}, which does not translate. @item @code{Ferror()} (actually, @code{Fsignal()} when condition is Qerror) works similarly. @end itemize More specifically, we do: @quotation Scan specified C and Lisp files, extracting the following messages: @example C files: GETTEXT (...) DEFER_GETTEXT (...) DEFUN interactive prompts Lisp files: (gettext ...) (dgettext "domain-name" ...) (defer-gettext ...) (interactive ...) @end example The arguments given to this program are all the C and Lisp source files of GNU Emacs. .el and .c files are allowed. There is no support for .elc files at this time, but they may be specified; the corresponding .el file will be used. Similarly, .o files can also be specified, and the corresponding .c file will be used. This helps the makefile pass the correct list of files. The results, which go to standard output or to a file specified with -a or -o (-a to append, -o to start from nothing), are quoted strings wrapped in gettext(...). The results can be passed to xgettext to produce a .po message file. However, we also need to do the following: @enumerate @item Definition of Arg below won't handle a generalized argument as might appear in a function call. 
This is fine for DEFUN and friends, because only simple arguments
appear there; but it might run into problems if Arg is used for other
sorts of functions.

@item
@code{snarf()} should be modified so that it doesn't output null
strings and non-textual strings (see the comment at the top of
@file{make-msgfile.c}).

@item
parsing of (insert) should snarf all of the arguments.

@item
need to add set-keymap-prompt and deal with gettext of that.

@item
parsing of arguments should snarf all strings anywhere within the
arguments, rather than just looking for a string as the argument.
This allows if statements as arguments to get parsed.

@item
@code{begin_paren_counting()} et al. should handle recursive entry.

@item
handle set-window-buffer and other such functions that take a buffer
as the other-than-first argument.

@item
there is a fair amount of work to be done on the C code.  Look through
the code for #### comments associated with '#ifdef I18N3' or with an
I18N3 nearby.

@item
Deal with @code{get-buffer-process} et al.

@item
Many of the changes in the Lisp code marked 'rewritten for I18N3
snarfing' should be undone once (5) is implemented.

@item
Go through the Lisp code in prim and make sure that all strings are
gettexted as necessary.  This may reveal more things to implement.

@item
Do the equivalent of (8) for the Lisp code.

@item
Deal with parsing of menu specifications.
@end enumerate
@end quotation

@node Future Work -- Lisp Stream API, Future Work -- Multiple Values, Future Work -- Byte Code Snippets, Future Work
@section Future Work -- Lisp Stream API
@cindex future work, Lisp stream API
@cindex Lisp stream API, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

Expose XEmacs internal lstreams to Lisp as stream objects.

(In addition to the functions given below, each stream object has
properties that can be associated with it using the standard put, get,
etc. API.  For GNU Emacs, where put and get have not been extended to
be general property functions, but work only on symbols, we would have
to create functions set-stream-property, stream-property,
remove-stream-property, and stream-properties.  These provide the same
functionality as the generic get, put, remprop, and object-plist
functions under XEmacs.)

(Implement properties using a hash table, and @strong{generalize} this
so that it is extremely easy to add a property interface onto any kind
of object.)

@example
(write-stream STREAM STRING)
@end example

Write the STRING to the STREAM.  This will signal an error if all the
bytes cannot be written.

@example
(read-stream STREAM &optional N SEQUENCE)
@end example

Reads data from STREAM.  N specifies the number of bytes or
characters, depending on the stream.  SEQUENCE specifies where to
write the data into.  If N is not specified, data is read until end of
file.  If SEQUENCE is not specified, the data is returned as a string.
If SEQUENCE is specified, the SEQUENCE must be large enough to hold
the data.

@example
(push-stream-marker STREAM)
@end example

Returns ID, probably a stream marker object.

@example
(pop-stream-marker STREAM)
@end example

Backs up stream to last marker.

@example
(unread-stream STREAM STRING)
@end example

The only valid STREAM is an input stream, in which case the data in
STRING is pushed back and will be read ahead of all other data.  In
general, there is no limit to the amount of data that can be unread or
the number of times that unread-stream can be called before another
read.
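To give a feel for the proposed API, here is a hypothetical usage
sketch; none of these functions exist yet, and @code{make-input-stream},
@code{make-output-stream}, and @code{close-stream} are described later
in this section.  It copies a file in 4096-byte chunks:

@example
;; Hypothetical: copy one file to another using the proposed API.
(let ((in  (make-input-stream  'file :file-name "/tmp/src"))
      (out (make-output-stream 'file :file-name "/tmp/dest" :create t)))
  (unwind-protect
      (let (chunk)
        ;; Assume read-stream returns a string, and "" (or nil) at
        ;; end of file.
        (while (> (length (setq chunk (read-stream in 4096))) 0)
          (write-stream out chunk)))
    (close-stream in)
    (close-stream out)))
@end example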
@example
(stream-available-chars STREAM)
@end example

This returns the number of characters (or bytes) that can definitely
be read from the stream without an error.  This can be useful, for
example, when dealing with non-blocking streams, where an attempt to
read too much data would result in a blocking error.

@example
(stream-seekable-p STREAM)
@end example

Returns true if the stream is seekable.  If false, operations such as
seek-stream and stream-position will signal an error.  However, the
functions set-stream-marker and seek-stream-marker will still succeed
for an input stream.

@example
(stream-position STREAM)
@end example

If STREAM is a seekable stream, returns a position which can be passed
to seek-stream.

@example
(seek-stream STREAM N)
@end example

If STREAM is a seekable stream, move to the position indicated by N,
otherwise signal an error.

@example
(set-stream-marker STREAM)
@end example

If STREAM is an input stream, create a marker at the current position,
which can later be moved back to.  The stream does not need to be a
seekable stream.  In this case, all successive data will be buffered
to simulate the effect of a seekable stream.  Therefore use this
function with care.

@example
(seek-stream-marker STREAM MARKER)
@end example

Move the stream back to the position that was stored in the marker
object.  (This is generally an opaque object of type stream-marker.)

@example
(delete-stream-marker MARKER)
@end example

Destroy the stream marker and, if the stream is a non-seekable stream
and there are no other stream markers pointing to an earlier position,
free up some buffering information.

@example
(delete-stream STREAM N)
@end example

@example
(delete-stream-marker STREAM ID)
@end example

@example
(close-stream STREAM)
@end example

Writes any remaining data to the stream and closes it and the object
to which it's attached.  This also happens automatically when the
stream is garbage collected.

@example
(getchar-stream STREAM)
@end example

Return a single character from the stream.  (This may be a single byte
depending on the nature of the stream.)  This is actually a macro with
an extremely efficient implementation (as efficient as you can get in
Emacs Lisp), so that this can be used without fear in a loop.

The implementation works by reading a large amount of data into a
vector and then simply using the function AREF to read characters one
by one from the vector.  Because AREF is one of the primitives handled
specially by the byte interpreter, this will be very efficient.  The
actual implementation may in fact use the function
call-with-condition-handler to avoid the necessity of checking for
overflow.  Its typical implementation is to fetch the vector
containing the characters as a stream property, as well as the index
into that vector.  Then it retrieves the character, increments the
index, and stores the index back in the stream.  As a first
implementation, we check when reading the character whether the index
would be out of range.  If so, we read another 4096 characters,
storing them into the same vector, setting the index back to the
beginning, and then proceeding with the rest of the getchar algorithm.

@example
(putchar-stream STREAM CHAR)
@end example

This is similar to getchar-stream but it writes data instead of
reading data.
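The buffered implementation just described might look roughly like the
following; @code{stream-get}, @code{stream-put}, and
@code{getchar-stream-refill} are hypothetical helpers, and a
production version would use an uninterned symbol instead of @code{s}
to avoid variable capture:

@example
(defmacro getchar-stream (stream)
  "Read one character from STREAM, buffering 4096 characters at a time."
  `(let* ((s ,stream)
          (vec (stream-get s 'getchar-vector))
          (i   (stream-get s 'getchar-index)))
     (when (>= i (length vec))
       ;; Out of buffered data: read the next 4096 characters into
       ;; the vector and start over at index 0.
       (setq vec (getchar-stream-refill s)  ; hypothetical refill
             i 0))
     (prog1 (aref vec i)
       (stream-put s 'getchar-index (1+ i)))))
@end example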
@example
Function make-stream
@end example

There are actually two stream-creation functions, which are:

@example
(make-input-stream TYPE PROPERTIES)
(make-output-stream TYPE PROPERTIES)
@end example

These can be used to create a stream that reads data, or writes data,
respectively.  PROPERTIES is a property list, and the allowable
properties in it are defined by the type.  Possible types are:

@enumerate
@item
@code{file} (this reads data from a file or writes to a file)

Allowable properties are:

@table @code
@item :file-name
(the name of the file)
@item :create
(for output streams only; creates the file if it doesn't already
exist)
@item :exclusive
(for output streams only; fails if the file already exists)
@item :append
(for output streams only; starts appending to the end of the file
rather than overwriting the file)
@item :offset
(the position in bytes in the file where reading or writing should
begin.  If unspecified, defaults to the beginning of the file, or to
the end of the file when :append is specified)
@item :count
(for input streams only; the number of bytes to read from the file
before signaling "end of file".  If nil or omitted, the number of
bytes is unlimited)
@item :non-blocking
(if true, reads or writes will fail if the operation would block.
This only makes sense for non-regular files)
@end table

@item
@code{process} (for output streams only; send data to a process)

Allowable properties are:

@table @code
@item :process
(the process object)
@end table

@item
@code{buffer} (read from or write to a buffer)

Allowable properties are:

@table @code
@item :buffer
(the name of the buffer or the buffer object)
@item :start
(the position to start reading from or writing to.  If nil, use the
buffer point.  If true, use the buffer's point and move point beyond
the end of the data read or written.)
@item :end
(only for input streams; the position to stop reading at.  If nil,
continue to the end of the buffer.)
@item :ignore-accessible
(if true, the defaults for :start and :end ignore any narrowing of the
buffer)
@end table

@item
@code{stream} (read from or write to a lisp stream)

Allowable properties are:

@table @code
@item :stream
(the stream object)
@item :offset
(the position to begin reading from or writing to)
@item :length
(for input streams only; the amount of data to read, defaulting to the
rest of the data in the string)
@item :resize-string
(for output streams only; if true, the string is resized as necessary
to accommodate data written off the end, otherwise the writes will
fail)
@end table

@item
@code{memory} (for output only; writes data to an internal memory
buffer.  This is more lightweight than using a Lisp buffer.  The
function memory-stream-string can be used to convert the memory into a
string.)

@item
@code{debugging} (for output streams only; write data to the debugging
output)

@item
@code{stream-device} (during non-interactive invocations only; read
from or write to the initial stream terminal device)

@item
@code{function} (for output streams only; send data by calling a
function, exactly as with the STREAM argument to the print primitive)

Allowable properties are:

@table @code
@item :function
(the function to call.  The function is called with one argument, the
stream.)
@end table

@item
@code{marker} (write data to the location pointed to by a marker and
move the marker past the data)

Allowable properties are:

@table @code
@item :marker
(the marker object)
@end table

@item
@code{decoding} (as an input stream, reads data from another stream
and decodes it according to a coding system.  As an output stream,
decodes the data written to it according to a coding system and then
writes the results to another stream.)

Properties are:

@table @code
@item :coding-system
(the symbol or coding-system object, which defines the decoding)
@item :stream
(the stream on the other end)
@end table

@item
@code{encoding} (as an input stream, reads data from another stream
and encodes it according to a coding system.  As an output stream,
encodes the data written to it according to a coding system and then
writes the results to another stream.)

Properties are:

@table @code
@item :coding-system
(the symbol or coding-system object, which defines the encoding)
@item :stream
(the stream on the other end)
@end table
@end enumerate

Consider

@example
(define-stream-type 'type
  :read-function
  :write-function
  :rewind-
  :seek-
  :tell-
  (?:buffer)
@end example

Old Notes:

Expose lstreams as hash (put get etc. properties) table.

@example
(write-stream stream string)
(read-stream stream &optional n sequence)
(make-stream ...)
(push-stream-marker stream)
  returns ID prob a stream marker object
(pop-stream-marker stream)
  backs up stream to last marker
(unread-stream stream string)
(stream-available-chars stream)
(seek-stream stream n)
(delete-stream stream n)
(delete-stream-marker stream ic)
  can always be poe only nested if you have set stream marker
(get-char-stream @strong{generalizes} stream)
  a macro that tries to be efficient perhaps by reading the next
  e.g. 512 characters into a vector and arefing them.  Might check
  aref optimization for vectors in the byte interpreter.
(make-stream 'process :process ... :type write)

Consider
(define-stream-type 'type
  :read-function
  :write-function
  :rewind-
  :seek-
  :tell-
  (?:buffer)
@end example

@node Future Work -- Multiple Values, Future Work -- Macros, Future Work -- Lisp Stream API, Future Work
@section Future Work -- Multiple Values
@cindex future work, multiple values
@cindex multiple values, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

At a low level, all funs that can return multiple values are defined
with DEFUN_MULTIPLE_VALUES and have an extra parameter, a struct
mv_context *.  It has to be this way to ensure that only the fun
itself, and no called funs, think they're called in an mv context.
apply, funcall, and eval might propagate their mv context to their
children?  Might need eval-mv to implement calling a fun in an mv
context.  Maybe also funcall_mv?  apply_mv?  Generally, just set up
the context appropriately, call the fun (noticing whether it's an
mv-aware fun), and bind the values on the way back or pass them out
(e.g. to multiple-value-bind).

@subheading Common Lisp multiple values, required for specifier improvements.

The multiple return values from get-specifier should allow the
specifier value to be modified in the correct fashion (i.e. should
interact correctly with all manner of changes from other callers)
using set-specifier.  We should check this and see if we need other
return values.  (how-to-add?  inst-list?)

In C, call multiple-values-context to get the number of expected
values, and multiple-value-set (#, value) to get values other than the
first.  (Returns Qno_value, or something, if there are no values.
#### Or should it throw?  Probably not.  #### What happens if a fn
returns no values but the caller expects a value?)  Something like
@code{funcall_with_multiple_values()} for setting up the context.
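On the Lisp level, the intent is that callers would use the Common
Lisp @code{multiple-value-bind} form.  The sketch below assumes the
extra return values of @code{specifier-instance} proposed in the
following section; @code{my-specifier} and @code{my-domain} are
placeholders:

@example
;; Hypothetical: only when the caller sets up an mv context, as
;; multiple-value-bind does, are the extra values computed and bound.
(multiple-value-bind (value instantiator locale tag-set)
    (specifier-instance my-specifier my-domain)
  (list value instantiator locale tag-set))
@end example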
For efficiency, byte code could notice Ffuncall to m.v. functions and
sub in special opcodes during load-in processing, if it mattered.

@node Future Work -- Macros, Future Work -- Specifiers, Future Work -- Multiple Values, Future Work
@section Future Work -- Macros
@cindex future work, macros
@cindex macros, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@enumerate
@item
Option to control whether beep really kills a macro execution.

@item
Recently defined macros are remembered on a stack, so accidentally
defining another one doesn't fuck you up.  You can "rotate" anonymous
macros or just pick one (numbered) to put on tags, so it works with
execute macro - the menu shows the anonymous macro, and lists some
keystrokes.  Normally numbered, but you can easily assign one to a
named fun or to a keyboard sequence, or give it a number (or give it a
letter accelerator?).
@end enumerate

@node Future Work -- Specifiers, Future Work -- Display Tables, Future Work -- Macros, Future Work
@section Future Work -- Specifiers
@cindex future work, specifiers
@cindex specifiers, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@subheading Ideas To Work On When Their Time Has Come

@itemize
@item
specifier-instance returns additional params (multiple-value) - the
instantiator used, the associated tag set, the locale found in, a code
that can be passed in as an additional param RESTART to restart an
instantiation process, e.g. to allow an instantiator to "inherit" from
another one higher up.  Also, domain can be 'global (look only in
global specs) or "complex" - a list of the actual locales to look in
(e.g. a buffer, a frame, a device, 'global).

@item
pragmatic-specifier-domain (locale)

Converts a locale into a domain in a way that's "pragmatic" - does
what most users expect will happen, but is not clean.  In particular,
handling of "buffer" requires trickiness, as mentioned before.

@item
ensure-instantiator-exists (specifier locale)

Ensures an actual instantiator exists in a locale, so that it can
later be futzed with.  If none exists, one is constructed by first
calling pragmatic-specifier-domain and then specifier-instance and
fetching out the instantiator for this call.

@item
map-modifying-instantiators (specifier fun &optional locale tag-set)

Same args as map-specifier, but uses the return value from the fun to
replace the instantiator.  Called with three args (instantiator locale
tag-set).

@item
map-modifying-instantiators-force (specifier fun &optional locale tag-set)

Same as previous, but calls ensure-instantiator-exists on each locale
before processing.
@end itemize

NOTE: Can do preliminary implementation without Multiple Values -
instead create fun specifier-instance that returns a list (and will be
deleted at some point).

@subheading specifier &c changes for glyphs

@enumerate
@item
@itemize @bullet
@item
resizable vectors with funs to insert and delete elements (elements
shift accordingly)
@item
gap array vectors as an implementation of resizing vectors.
@end itemize

@item
You can @code{put}, @code{get}, etc. on vectors to modify properties
within them.

@item
copy-over routines: routines that carefully copy one complex item OVER
another one, destroying the second in the process.  I wrote one for
lists.  Need a general copy-over-tree.

@item
improvement to specifier mapping routines, e.g.
map-modifying-instantiator and its force versions below, so that we
could implement them in turn.
@item
put-specifier-property (specifier key value &opt locale tag-set)

Finds the instantiator in the locale (possibly creating one if
necessary), goes into the vector, changes it, and puts it back into
the specifier.

@item
Smarter add-spec-to-specifier.  If it notices that it's just replacing
one instantiator with another, then instead of just copy-treeing the
first one and throwing away the other, use copy-over-tree to save lots
of garbage when repeatedly called.

ILLEGIBLE: GOTO LOO BUI BUGS LAST PNOTE

@item
When at image instantiate:

@itemize @bullet
@item
Some properties in the instantiators could be implemented through
dynamically modifying an existing image instance (e.g. when the value
of a slider or progress bar or the text in a text field changes).  So
when we hash, we only hash the part of the instantiator that cannot be
dynamically modified.  (We might need to do something tricky here -
allowing a :key property in hash tables or @strong{ILLEGIBLE}.)
Anyway, so we need to generate an image instance, and we mask off the
dynamic properties and look up in our hash table, and we get something
back!  But is it ours to modify?  (We already checked to see it wasn't
exactly the same dynamic properties that it had.)  Thus ---
@end itemize

@item
Reference counting.  Somehow or other, each image instance in the
cache needs to keep track of the instantiators that generated it.
@end enumerate

It might do this through some sort of special instantiator-reference
object.  This points to the instantiator, where in the hierarchy the
instantiator is, etc.  When an instantiator gets removed, this
gu*ILLEGIBLE* values report not attached.  Somehow that gets
communicated back to the image instance in the cache.  So somehow or
other, the image instances in the cache know who's using them, and so
when you go and keep updating the slider value by simply modifying an
instantiator, which efficiently changes the internal structure of the
specifier - eventually image instantiate notices that the image
instance it points to has no other user and just modifies it.  In
complex situations, some optimizations get lost, but everything is
still correct.

vs. Andy's set-image-instance-property, which achieves the same
optimizations much more easily, but

@enumerate
@item
falls apart in any more complicated system

@item
only works because of the way the caching system in XEmacs works.  Any
change (e.g. @strong{ILLEGIBLE} more of making the caches GQ instead
of GQ) is likely to make things stop working right in all but the
simplest situation.
@end enumerate

@subheading Specifier improvements for support of specifier inheritance (necessary for the new font mapping API)

'Fallback should be a locale/domain.

@example
(get-specifier specifier &optional locale)

#### If locale is omitted, should it be (current-buffer) or 'global?
#### Should argument not be optional?
@end example

If a buffer is specified: find a window showing the buffer by looking

@itemize @bullet
@item
at selected window
@item
at other windows on selected frame
@item
at selected windows on other frames in selected device
@item
at other windows on ""
@item
at selected windows on selected frames on other devices in selected
console.
@item
other windows sel from other devices sel con
@item
"" oth "" sel
@item
sel win sel from sel dev oth con
@item
oth win sel from sel dev oth con
@item
sel win oth from sel dev oth con
@item
oth win oth from sel dev oth con
@item
sel win sel from oth dev oth con
@item
oth win sel from oth dev oth con
@item
oth win oth from oth dev oth con
@end itemize

If none, use buffer -> sel from -> etc.

@example
Returns multiple values:
  second is instantiator
  third is locale containing inst.
  fourth is tag set

(restart-specifier-instance ...)
@end example

Like specifier-instance, but allows restarting the lookup, for
implementing inheritance, etc.  Obsoletes
specifier-matching-find-charset, or whatever it is.  The restart
argument is opaque, and is returned as a multiple value of
restart-specifier-instance.  (It's actually an integer, with the low
bits holding the locale and the other bits a count into the list
attached to the locale.)

@node Future Work -- Display Tables, Future Work -- Making Elisp Function Calls Faster, Future Work -- Specifiers, Future Work
@section Future Work -- Display Tables
@cindex future work, display tables
@cindex display tables, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

#### It would also be really nice if you could specify that the
characters come out in hex instead of in octal.  Mule does that by
adding a @code{ctl-hexa} variable similar to @code{ctl-arrow}, but
that's bogus -- we need a more general solution.

I think you need to extend the concept of display tables into a more
general conversion mechanism.  Ideally you could specify a Lisp
function that converts characters, but this violates the Second Golden
Rule and besides would make things way way way way slow.

So instead, we extend the display-table concept, which was
historically limited to 256-byte vectors, to one of the following:

@enumerate
@item
a 256-entry vector, for backward compatibility;
@item
a char-table, mapping characters to values;
@item
a range-table, mapping ranges of characters to values;
@item
a list of the above.
@end enumerate

The fourth option allows you to specify multiple display tables
instead of just one.  Each display table can specify conversions for
some characters and leave others unchanged.  The way the character
gets displayed is determined by the first display table with a binding
for that character.  This way, you could call a function
@code{enable-hex-display} that adds a hex display-table to the list of
display tables for the current buffer.

#### ...not yet implemented...

Also, we extend the concept of "mapping" to include a printf-like
spec.  Thus you can make all extended characters show up as hex with a
display table like this:

@example
#s(range-table data ((256 524288) (format "%x")))
@end example

Since more than one display table is possible, you have great
flexibility in mapping ranges of characters.

@node Future Work -- Making Elisp Function Calls Faster, Future Work -- Lisp Engine Replacement, Future Work -- Display Tables, Future Work
@section Future Work -- Making Elisp Function Calls Faster
@cindex future work, making Elisp function calls faster
@cindex making Elisp function calls faster, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@strong{Abstract: }This page describes many optimizations that can be
made to the existing Elisp function call mechanism without too much
effort.  The most important optimizations can probably be implemented
with only a day or two of work.
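As a crude way to observe the overhead under discussion, a timing loop
like the following can be used (the helper names here are made up, and
the absolute numbers will of course vary):

@example
;; Convert a (HIGH LOW USEC) time to floating-point seconds, for
;; portability to older Emacsen without higher-level time functions.
(defun time-to-float (time)
  (+ (* 65536.0 (nth 0 time))
     (nth 1 time)
     (/ (or (nth 2 time) 0) 1000000.0)))

(defun trivial-fun (x) x)   ; a do-nothing function to call

(let ((start (time-to-float (current-time))))
  (dotimes (i 1000000)
    (funcall #'trivial-fun i))
  (- (time-to-float (current-time)) start))   ; elapsed seconds
@end example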
I think it's important to do this work regardless of whether we
eventually decide to replace the Lisp engine.  Many complaints have
been made about the speed of Elisp, and in particular about the
slowness in executing function calls, and rightly so.  If you look at
the implementation of the @code{funcall} function, you'll notice that
it does an incredible amount of work.  Now logically, it doesn't need
to be so.

Let's look first from the theoretical standpoint at what absolutely
needs to be done to call a Lisp function.

First, let's look at the situation that would exist if we were smart
enough to have made lexical scoping be the default language policy.
We know at compile time exactly which code can reference the variables
that are the formal parameters for the function being called
(specifically, only the code that is part of that function's
definition) and where these references are.  As a result, we can
simply push all the values of the variables onto a stack, and convert
all the variable references in the function definition into stack
references.  Therefore, binding lexically-scoped parameters in
preparation for a function call involves nothing more than pushing the
values of the parameters onto a stack and then setting a new value for
the frame pointer, at the same time remembering the old one.  Because
the byte-code interpreter has a stack-based architecture, however, the
parameter values have already been pushed onto the stack at the time
of the function call invocation.  Therefore, binding the variables
involves doing nothing at all, other than dealing with the frame
pointer.

With dynamic scoping, the situation is somewhat more complicated.
Because the parameters can be referenced anywhere, and these
references cannot be located at compile time, their values have to be
stored into a global table that maps the name of the parameter to its
current value.  In Elisp, this table is called the @dfn{obarray}.
Variable binding in Elisp is done using the C function
@code{specbind()}.  (This stands for "special variable binding", where
@dfn{special} is the standard Lisp terminology for a
dynamically-scoped variable.)  What @code{specbind()} does,
essentially, is retrieve the old value of the variable out of the
obarray, remember the value by pushing it, along with the name of the
variable, onto what's called the @dfn{specpdl} stack, and then store
the new value into the obarray.  The term "specpdl" means @dfn{Special
Variable Pushdown List}, where @dfn{Pushdown List} is an archaic
computer science term for a stack, once popular at MIT.  These binding
operations, however, should still not take very much time, because of
the use of symbols, i.e. because the location in the obarray where the
variable's value is stored has already been determined (specifically,
it was determined at the time that the byte code was loaded and the
symbol created), so no expensive hash table lookups need to be
performed.

An actual function invocation in Elisp does a great deal more work,
however, than was just outlined above.  Let's just take a look at what
happens when one byte-compiled function invokes another byte-compiled
function, checking for places where unnecessary work is being done and
determining how to optimize these places.

@enumerate
@item
The byte-compiled function's parameter list is stored in exactly the
format that the programmer entered it in, which is to say as a Lisp
list, complete with @code{&optional} and @code{&rest} keywords.
This list has to be parsed for @emph{every} function invocation, which
means that for every element in the list, the element is checked to
see whether it's the @code{&optional} or @code{&rest} keyword, its
surrounding cons cell is checked to make sure that it is indeed a cons
cell, the @code{QUIT} macro is called, etc.

What should be happening here is that the argument list is parsed
exactly once, at the time that the byte code is loaded, and converted
into a C array.  The C array should be stored as part of the byte-code
object.  The C array should also contain, in addition to the symbols
themselves, the number of required and optional arguments.  At
function call time, the C array can be very quickly retrieved and
processed.

@item
For every variable that is to be bound, the @code{specbind()} function
is called.  This actually does quite a lot of things, including:

@enumerate
@item
Checking the symbol argument to the function to make sure it's
actually a symbol.

@item
Checking for specpdl stack overflow, and increasing its size as
necessary.

@item
Calling @code{symbol_value_buffer_local_info()} to retrieve buffer
local information for the symbol, and then processing the return value
from this function in a series of if statements.

@item
Actually storing the old value onto the specpdl stack.

@item
Calling @code{Fset()} to change the variable's value.
@end enumerate
@end enumerate

The entire series of calls to @code{specbind()} should be inlined and
merged into the argument processing code as a single tight loop, with
no function calls in the vast majority of cases.  The
@code{specbind()} logic should be streamlined as follows:

@enumerate
@item
The symbol argument type checking is unnecessary.

@item
The check for specpdl stack overflow needs to be done only once, not
once per argument.

@item
All of the remaining logic should be boiled down as follows:

@enumerate
@item
Retrieve the old value from the symbol's value cell.

@item
If this value is a symbol-value-magic object, then call the real
@code{specbind()} to do the work.

@item
Otherwise, we know that nothing complicated needs to be done, so we
simply push the symbol and its value onto the specpdl stack, and then
replace the value in the symbol's value cell.

@item
The only logic that we are omitting is the code in @code{Fset()} that
checks to make sure a constant isn't being set.  These checks should
be made at the time that the byte code for the function is loaded and
the C array of parameters to the function is created.  (Whether a
symbol is constant or not is generally known at XEmacs compile time.
The only issue here is with symbols whose names begin with a colon.
These symbols should simply be disallowed completely as parameter
names.)
@end enumerate
@end enumerate

Other optimizations that could be done are:

@itemize
@item
At the beginning of the function that implements the byte-code
interpreter (this is the Lisp primitive @code{byte-code}), the string
containing the actual byte code is converted into an array of
integers.  I added this code specifically for MULE so that the
byte-code engine didn't have to deal with the complexities of the
internal string format for text.  This conversion, however, is
generally useful, because on modern processors accessing 32-bit values
out of an array is significantly faster than accessing unaligned 8-bit
values.  This conversion takes time, though, and should be done once
at load time rather than each time the byte code is executed.  This
array should be stored in the byte-code object.
Currently, this is a bit tricky to do, because @code{byte-code} is not
actually passed the byte-code object, but rather three of its
elements.  We can't just change @code{byte-code} so that it is
directly passed the byte-code object, because this function, with its
existing argument calling pattern, is called directly from compiled
Elisp files.  What we can and should do, however, is create a
subfunction that does take a byte-code object and actually implements
the byte-code interpreter engine.  Whenever the C code wants to
execute byte code, it calls this subfunction.  @code{byte-code} itself
also calls this subfunction after conjuring up an appropriate
byte-code object and storing its arguments into this object.  With a
small amount of work, it's possible to do this conjuring in such a way
that it doesn't generate any garbage.

@item
At the end of a function call, the parameter bindings that have been
done need to be undone.  This is standardly done by calling
@code{unbind_to()}.  Just as for @code{specbind()}, this function does
a lot of work that is unnecessary in the vast majority of cases, and
it could also be inlined and streamlined.

@item
As part of each Elisp function call, a whole bunch of checks are done
for a series of unlikely but possible conditions that may occur.
These include, for example,

@itemize
@item
Calling the @code{QUIT} macro, which essentially involves checking a
global volatile variable to see whether additional processing needs to
be done.

@item
Checking whether a garbage collection needs to be done.

@item
Checking the variable @code{debug_on_next_call}.

@item
Checking for whether Elisp profiling is active.

(An additional optimization that's perhaps not worth the effort is to
do some post-processing on the array of integers after it has been
converted.  For example, whenever a 16-bit value occurs in the byte
code, it has to be encoded as two separate 8-bit values.  These values
could be combined.  The tricky part here is that all of the places
where a goto occurs across the place where this modification is made
would have to have their offsets changed.  Other such optimizations
can easily be imagined as well.)
@end itemize

@item
With a little bit smarter code, it should be possible to make a single
trip variable that indicates whether any of these conditions is true.
This variable would be updated by any code that changes the actual
variables whose values are checked in the various checks just
mentioned.  (By the way, all of this is occurring in the C function
@code{funcall_recording_as()}.)  There is a little bit of code between
each of the checks.  This code would simply have to be duplicated
between the two cases where this general trip variable is true and is
false.  (Note: the optimization detailed in this item is probably not
worth doing on the first pass.)
@end itemize

@node Future Work -- Lisp Engine Replacement, , Future Work -- Making Elisp Function Calls Faster, Future Work
@section Future Work -- Lisp Engine Replacement
@cindex future work, lisp engine replacement
@cindex lisp engine replacement, future work

@menu
* Future Work -- Lisp Engine Discussion::
* Future Work -- Lisp Engine Replacement -- Implementation::
* Future Work -- Startup File Modification by Packages::
@end menu

@node Future Work -- Lisp Engine Discussion, Future Work -- Lisp Engine Replacement -- Implementation, Future Work -- Lisp Engine Replacement, Future Work -- Lisp Engine Replacement
@subsection Future Work -- Lisp Engine Discussion
@cindex future work, lisp engine discussion
@cindex lisp engine discussion, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@strong{Abstract: }Recently there has been a great deal of talk on the
XEmacs mailing lists about potential changes to the XEmacs Lisp
engine.  Usually the discussion has centered around the question of
which is better, Common Lisp or Scheme.  This is certainly an
interesting debate topic, but it didn't seem to have much practical
relevance to me, so I vowed to stay out of the discussion.

Recently, however, it seems that people are losing sight of the
broader picture.  For example, nobody seems to be asking the question
``Would an extension language other than Lisp or Scheme (perhaps not a
Lisp variant at all) be more appropriate?''  Nor does anybody seem to
be addressing what I consider to be the most fundamental question: is
changing the extension language a good thing to do?

I think it would be a mistake at this point in XEmacs development to
begin any project involving fundamental changes to the Lisp engine or
to the XEmacs Lisp language itself.  It would take a huge amount of
effort to complete even part of this project, and it would be a major
drain on the already-insufficient resources of the XEmacs development
community.  Most of the gains that are purported to stem from a
project such as this could be obtained with far less effort by making
more incremental changes to the XEmacs core.

I think it would be an even bigger mistake to change the actual XEmacs
extension language (as opposed to just changing the Lisp engine,
making few, if any, externally visible changes).  The only language
change that I could possibly imagine justifying would involve
switching to some ubiquitous web language, such as Java and
JavaScript, or Perl.  (Even among those, I think Java would be the
only possibility that really makes sense.)

In the rest of this document I'll present the broader issues that
would be involved in changing the Lisp engine or extension language.
This should make clear why I've come to believe as I do.

@subheading Is everyone clear on the difference between interface and implementation?

There seems to be a great deal of confusion concerning the difference
between interface and implementation.  In the context of XEmacs,
changing the interface means switching to a different extension
language such as Common Lisp, Scheme, Java, etc.  Changing the
implementation means using a different Lisp engine.  There is
obviously some relation between these two issues, but there is no
particular requirement that one be changed if the other is changed.
It is quite possible, for example, to imagine taking the underlying
engine for any of the various Lisp dialects in existence and adapting
it so that it implements the same Elisp extension language that
currently exists.
The vast majority of the purported benefits that we would get from
changing the extension language could just as easily be obtained while
making minimal changes to the external Elisp interface.  This way,
nearly all existing Elisp programs would continue to work; there would
be no need to translate Elisp programs into some other language or to
simultaneously support two incompatible Lisp variants; and there would
be no need for users or package authors to learn a new extension
language that would be just as unfamiliar to the vast majority of them
as Elisp is.

@subheading Why should we change the Lisp engine?

Let's go over the possible reasons for changing the Lisp engine.

@subsubheading Speed.

Changing the Lisp engine might make XEmacs faster.  However, consider
the following.

@enumerate
@item
XEmacs will get faster over time without any development effort at
all, because computers will get faster.

@item
Perhaps the biggest causes of the slowness of XEmacs are not related
to the Lisp engine at all.  It has been asserted, for example, that
the slowness of XEmacs is primarily due to the redisplay mechanism, to
the handling of insertion and deletion of text in a buffer, to the
event loop, etc.  Nobody has done any real studies to determine what
the actual cause of slowness is.

@item
Emacs 18 seems plenty fast enough to most people.  However, Emacs 18
also had a worse Lisp engine and a worse byte compiler than XEmacs.

@item
Significant speed increases in the execution of Lisp code could be
achieved without too much effort by working on the existing byte code
interpreter and function call mechanism a bit.
@end enumerate

@subsubheading Memory usage.

A new Lisp engine with a better garbage collection mechanism might
make more efficient use of memory; for example, through the use of a
relocating garbage collector.  However, consider this:

@enumerate
@item
A new Lisp engine would probably have a larger memory footprint,
perhaps a significantly larger one.

@item
The worst memory problems might not be due to Lisp object inefficiency
at all.  The problems could be due mainly to the inefficient buffer
representation.  Nobody has come up with any concrete numbers on where
the real problem lies.
@end enumerate

@subsubheading Robustness.

A new Lisp engine might well be more robust.  (On the other hand, it
might not be.  It is not always easy to tell.)  However, I think that
the biggest problems with robustness are in the part of the C code
that is not concerned with implementing the Lisp engine.  The
redisplay mechanism and the unexec mechanism are probably the biggest
sources of robustness problems.

I think the biggest robustness problems that are related to the Lisp
engine concern the use of GCPRO declarations.  The entire GCPRO
mechanism is ill-conceived and unsafe.  The only real way to make this
safe would be to do conservative garbage collection over the C stack
and to eliminate the GCPRO declarations entirely.  But how many of the
Lisp engines that are being considered have such a mechanism built
into them?

@subsubheading Maintainability.

A new Lisp engine might well improve the maintainability of XEmacs by
offloading the maintenance of the Lisp engine.  However, we need to
make very sure that this is, in fact, the case before embarking on a
project like this.
We would almost certainly have to make significant modifications to
any Lisp engine that we choose to integrate, and without the active
and committed support and cooperation of the developers of that Lisp
engine, the maintainability problem would actually get worse.

@subsubheading Features.

A new Lisp engine might have built-in support for various features
that we would like to add to the XEmacs extension language, such as
lexical scoping and an object system.

@subheading Why would we want to change the extension language?

Possible reasons for changing the extension language include:

@subsubheading More standard.

Switching to a language that is more standard and more commonly in use
would be beneficial for various reasons.  First of all, a language
that is more commonly used and more familiar would make it easier for
users to write their own extensions and, in general, increase the
acceptance of XEmacs.  Also, an accepted standard has probably had a
lot more thought put into it than any language interface created by
the XEmacs developers themselves.  Furthermore, if our extension
language is being actively developed and supported, much of the work
that we would otherwise have to do ourselves is transferred elsewhere.

However, both Scheme and Common Lisp flunk the familiarity test.
Neither language is being actively used for program development
outside of small research communities, and few prospective authors of
XEmacs extensions will be familiar with any Lisp variant from
real-world use.  (I consider the argument that Scheme is often used in
introductory programming courses to be irrelevant.  Many existing
programmers were taught Pascal in their introductory programming
courses.  How many of them would actually be comfortable writing a
program in Pascal?)  Furthermore, someone who wants to learn Lisp
can't exactly go to their neighborhood bookstore and pick up a book on
this topic.

@subsubheading Ease of use.

There are endless arguments about which language is easiest to use.
In practice, this largely boils down to which languages are most
familiar.

@subsubheading Object oriented.

The object-oriented paradigm is the dominant one in use today for new
languages.  User interface concepts in particular are expressed very
naturally in an object-oriented system.  However, neither Scheme nor
Common Lisp has been designed with object orientation in mind.  There
is a standard object system for Common Lisp, but it is extremely
complex and difficult to understand.

@node Future Work -- Lisp Engine Replacement -- Implementation, Future Work -- Startup File Modification by Packages, Future Work -- Lisp Engine Discussion, Future Work -- Lisp Engine Replacement
@subsection Future Work -- Lisp Engine Replacement -- Implementation
@cindex future work, lisp engine replacement, implementation
@cindex lisp engine replacement, implementation, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

Let's take a look at the sort of work that would be required if we
were to replace the existing Elisp engine in XEmacs with some other
engine, for example the Clisp engine.  I'm assuming here, of course,
that we are not going to be changing the interface at the same time,
which is to say that we will be keeping the same Elisp language that
we currently have as the extension language for XEmacs, except perhaps
for incremental changes that we will make, such as lexical scoping and
proper structure support, in an attempt to gradually move the language
towards an upwardly-compatible goal, such as Common Lisp.
I am writing this page primarily as food for thought.  I feel fairly
strongly that actually doing this work would be a big waste of effort
that would inevitably become a huge time sink on the part of nearly
everyone involved in XEmacs development, and not only for the ones who
were supposed to be actually doing the engine change.  I feel that
most of the desired changes that we want for the language and/or the
engine can be achieved with much less effort and time through
incremental changes to the existing code base.

First of all, in order to make a successful Lisp engine change in
XEmacs, it is vitally important that the work be done through a series
of incremental stages, where at the end of each stage XEmacs can be
compiled and run, and it works.  It is tempting to try to make the
change all at once, but this would be disastrous.  If the resulting
product worked at all, it would inevitably contain a huge number of
subtle and extremely difficult-to-track-down bugs, and it would be
next to impossible to determine which of the myriad changes made
introduced the bug.

Now let's look at what the possible stages of implementation could be.

@subsubheading An Extra C Preprocessing Stage

The first step would be to introduce another preprocessing stage for
the XEmacs C code, which is done before the C compiler itself is
invoked on the code, and before the standard C preprocessor runs.  The
C preprocessor is simply not powerful enough to do many of the things
we would like to do in the C code.  The existing results of this are a
combination of a lot of hacked-up and tricky-to-maintain stuff (such
as the @code{DEFUN} macro and the associated @code{DEFSUBR}), code
constructs that are difficult to write (consider, for example,
attempting to do structured exception handling, such as catch/throw
and unwind-protect constructs), and code that is potentially or
actually unsafe (such as the uses of @code{alloca}, which could easily
cause stack overflow when large amounts of memory are allocated in
this fashion).

The problem is that the C preprocessor does not allow macros to have
the power of an actual language, such as C or Lisp.  What our own
preprocessor should do is allow us to define macros whose definitions
are simply functions, written in some language, that are executed at
compile time, and whose arguments are the actual arguments of the
macro call, as well as an environment, which should hold a data
structure representation of the C code in the file and allow itself to
be queried and modified.

It can be debated which language these macro definitions should be
written in.  Whatever the language chosen, it needs to be a very
standard language, and a language whose compiler or interpreter is
available on all of the platforms that we could ever possibly consider
porting XEmacs to, which is basically to say all the platforms in
existence.  One obvious choice is C, because there will obviously be a
C compiler available, since one is needed to compile XEmacs itself.
Another possibility is Perl, which is already installed on most
systems, and is universally available on all others.
This language has powerful text processing facilities which would
probably make it possible to implement the macro definitions more
quickly and easily; however, this might also encourage bad coding
practices in the macros (often simple text processing is not
appropriate, and more sophisticated parsing or recursive data
structure processing needs to be done instead), and we'd have to make
sure that the nested data structure that comprises the environment
could be represented well in Perl.  Elisp would not be a good choice,
because it would create a bootstrapping problem.  Other possible
languages, such as Python, are not appropriate, because most
programmers are unfamiliar with this language (creating a
maintainability problem) and the Python interpreter would have to be
included and compiled as part of the XEmacs compilation process
(another maintainability problem).  Java is still too much in flux to
be considered at this point.

The macro facility that we will provide needs to add two features to
the language: the ability to define a macro, and the ability to call a
macro.  One good way of doing this would be to make use of special
characters that have no meaning in the C language (or in C++ for that
matter), and thus can never appear in a C file outside of comments and
strings.  Two obvious characters are the @@ sign and the $ sign.  We
could, for example, use @code{@@define} to define new macros, and the
@code{$} sign followed by the macro name to call a macro.  (Proponents
of Perl will note that both of these characters have a meaning in
Perl.  This should not be a problem, however, because the way that
macros are defined and called inside of another macro should not be
through the use of any special characters, which would in effect be
extending the macro language, but through function calls made in the
normal way for the language.)

The program that actually implements this extra preprocessing stage
needs to know a certain amount about how to parse C code.  In
particular, it needs to know how to recognize comments, strings,
character constants, and perhaps certain other kinds of C tokens, and
needs to be able to parse C code down to the statement level.  (This
is to say it needs to be able to parse function definitions and to
separate out the statements, @code{if} blocks, @code{while} blocks,
etc. within these definitions.  It probably doesn't, however, need to
parse the contents of a C expression.)

The preprocessing program should work first by parsing the entire file
into a data structure (which may just contain expressions in the form
of literal strings rather than a data structure representing the
parsed expression).  This data structure should become the environment
parameter that is passed as an argument to macros, as mentioned above.
The implementation of the parsing could, and probably should, be done
using @code{lex} and @code{yacc}.  One good idea is simply to steal
some of the @code{lex} and @code{yacc} code that is part of GCC.

Here are some possibilities that could be implemented as part of the
preprocessing:

@enumerate
@item
A proper way of doing the @code{DEFUN} macros.  These could, for
example, take an argument list in the form of a Lisp argument list
(complete with keyword parameters and other complex features) and
automatically generate the appropriate @code{subr} structure, the
appropriate C function definition header, and the appropriate call to
the @code{DEFSUBR} initialization function.

@item
A truly safe and easy-to-use implementation of the @code{alloca}
function.
This could allocate the memory in any fashion it chooses (calling
@code{malloc}, using a large global array, or a series of such arrays,
etc.) and insert, in the appropriate places, code to automatically
free up this memory.  (Appropriate places here would be at the end of
the function and before any return statements.  Non-local exits can be
handled in the function that actually implements the non-local exit.)

@item
If we allow for the possibility of having an arbitrary Lisp engine, we
can't necessarily assume that we can call Lisp primitives implemented
in C from other C functions by simply making a function call.  Perhaps
something special needs to happen when this is done.  This could be
handled fairly easily by having our new and improved @code{DEFUN}
macro define a new macro for use when calling a primitive.
@end enumerate

@subsubheading Make the Existing Lisp Engine Self-Contained.

The goal of this stage is to gradually build up a self-contained Lisp
engine out of the existing XEmacs core, which has no dependencies on
any of the code elsewhere in the XEmacs core, and has a well-defined
and black-box-style interface.  (This is to say that the rest of the C
code should not be able to access the implementation of the Lisp
engine, and should make as few assumptions as possible about how this
implementation works.)

The Lisp engine could, and probably should, be built up as a separate
library which can be compiled on its own without any of the rest of
the XEmacs C code, and can be tested in this configuration as well.

The creation of this engine library should be done as a series of
sub-steps, each of which moves more code out of the XEmacs core and
into the engine library, and XEmacs should be compilable and runnable
after each sub-step.  One possible series of sub-steps would be to
first create an engine that does only object allocation and garbage
collection, then as a second sub-step move in the code that handles
symbols, symbol values, and simple binding, and then finally move in
the code that handles control structures, function calling,
@code{byte-code} execution, exception handling, etc.  (It might well
be possible to further separate this last sub-step.)

@subsubheading Removal of Assumptions About the Lisp Engine Implementation

Currently, the XEmacs C code makes all sorts of assumptions about the
implementation of the Lisp engine, particularly in the areas of object
allocation, object representation, and garbage collection.  A
different Lisp engine may well have different ways of doing these
implementations, and thus the XEmacs C code must be rid of any
assumptions about them.  This is a tough and tedious job, but it needs
to be done.  Here are some examples:

@enumerate
@item
@code{GCPRO} must go.  The @code{GCPRO} mechanism is tedious,
error-prone, unmaintainable, and fundamentally unsafe.  As anyone who
has worked on the C core of XEmacs knows, figuring out where to insert
the @code{GCPRO} calls is an exercise in black magic, and debugging
crashes as a result of incorrect @code{GCPROing} is an absolute
nightmare.

Furthermore, the entire mechanism is fundamentally unsafe.  Even if we
were to use the extra preprocessing stage detailed above to
automatically generate @code{GCPRO} and @code{UNGCPRO} calls for all
Lisp object variables occurring anywhere in the C code, there are
still places where we could be bitten.  Consider, for example, code
which calls @code{cons} and where the two arguments to this function
are both calls to the @code{append} function.
Now the @code{append} function generates new Lisp objects, and it also
calls @code{QUIT}, which could potentially execute arbitrary Lisp code
and cause a garbage collection before returning control to the
@code{append} function.  Now, in order to generate the arguments to
the @code{cons} function, the @code{append} function is called twice
in a row.  When the first @code{append} call returns, new Lisp data
has been created, but there are no @code{GCPRO} pointers to it.  If
the second @code{append} call causes a garbage collection, the Lisp
data from the first @code{append} call will be collected and recycled,
which is likely to lead to obscure and impossible-to-debug crashes.

The only way around this would be to rewrite all function calls whose
parameters are Lisp objects in terms of temporary variables, so that
no such function calls ever contain other function calls as arguments.
This would not only be annoying to implement, even in a smart
preprocessor, but would make the C code incredibly slow because of all
the constant updating of the @code{GCPRO} lists.

@item
The only proper solution here is to completely do away with the
@code{GCPRO} mechanism and simply do conservative garbage collection
over the C stack.  There are already portable implementations of
conservative pointer marking over the C stack, and these could easily
be adapted for use in the Elisp garbage collector.  If, as outlined
above, we use an extra preprocessing stage to create a new version of
@code{alloca} that allocates its memory elsewhere than actually on the
C stack, and we ensure that we don't declare any large arrays as local
variables but instead use @code{alloca}, then we can be guaranteed
that the C stack is small, and thus that the conservative pointer
marking stage will be fast and not very likely to find false matches.

@item
Removing the @code{GCPRO} declarations as just outlined would also
remove the assumption currently made that garbage collection can occur
only in certain places in the C code, rather than in any arbitrary
spot (for example, at any time an allocation of Lisp data happens).
In order to make things really safe, however, we also have to remove
another assumption, as detailed in the following item.

@item
Lisp objects might be relocatable.  Currently, the C code assumes that
Lisp objects other than string data are not relocatable, and therefore
that it's safe to pass around and hold onto the actual pointers for
the C structures that implement the Lisp objects.  Current code, for
example, assumes that a @code{Lisp_Object} of type buffer and a C
pointer to a @code{struct buffer} mean basically the same thing, and
indiscriminately passes the two kinds of buffer pointers around.  With
relocatable Lisp objects, the pointers to the C structures might
change at any time.  (Remember, we are now assuming that a garbage
collection can happen at basically any point.)  All of the C code
needs to be changed so that Lisp objects are always passed around
using a Lisp object type, and the underlying pointers are only
retrieved at the time when a particular data element out of the
structure is needed.

(As an aside, here's another reason why Lisp objects, instead of
pointers, should always be passed around.  If pointers are passed
around, it's conceivable that at the time a garbage collection occurs,
the only reference to a Lisp object (for example, a deleted buffer)
would be in the form of a C pointer rather than a Lisp object.
In such a case, the conservative pointer marking mechanism might not
notice the reference, especially if, in an attempt to eliminate false
matches and make the code generally more efficient, it is written so
that it looks only for actual Lisp object references.)

@item
I would go a step further and completely eliminate the macros that
convert a Lisp object reference into a C pointer.  This way the only
way to access an element out of a Lisp object would be to use the macro
for that element, which in one atomic operation dereferences the Lisp
object reference and retrieves the value contained in the element.  We
probably do need the ability to retrieve actual C pointers, though: for
example, in the case where an array is stored in a Lisp object, or
simply for efficiency, where we might want some code to retrieve the C
pointer for a Lisp object and work on it directly to avoid a whole
bunch of extra indirections.  I think the way to do this would be
through the use of a special locking construct implemented as part of
the extra preprocessor stage mentioned above.  This would essentially
be what you might call a @dfn{lock block}, just like a @code{while}
block.  You'd write the word @code{lock}, followed by a parenthesized
expression that retrieves the C pointer and stores it into a variable
scoped only within the lock block, followed in turn by some code in
braces, which is the actual code associated with the lock block and
which can make use of this pointer.  While the code inside the lock
block is executing, that particular pointer and the object pointed to
by it are guaranteed not to be relocated.  (A sketch of what this might
look like follows this list.)

@item
If all the XEmacs C code were converted according to these rules, there
would be no restrictions on the sorts of implementations that can be
used for the garbage collector.  It would be possible, for example, to
have an incremental asynchronous relocating garbage collector that
operated continuously in another thread while XEmacs was running.

@item
The C implementation of Lisp objects might not, and probably should
not, be visible to the rest of the XEmacs C code.  It should
theoretically be possible, for example, to implement Lisp objects
entirely in terms of association lists, rather than using C structures
in the standard way.  (This may be an extreme example, but it's good to
keep in mind an example such as this when cleaning up the XEmacs C
code.)  The changes mentioned in the previous item would go a long way
towards removing this assumption.  The only places where this
assumption might still be made would be inside of the lock blocks where
an actual pointer is retrieved.  (Also, of course, we'd have to change
the way that Lisp objects are defined in C so that this is done with
some function calls and new and improved macros rather than by having
the XEmacs C code actually define the structures.  This sort of thing
would probably have to be done in any case once the allocation
mechanism is moved into a separate library.)  With some thought it
should be possible to define the lock block interface in such a way as
to remove any assumptions about the implementation of Lisp objects.

@item
C code may not be able to call Lisp primitives that are defined in C
simply by making standard C function calls.  There might need to be
some wrapper around all such calls.  This could be achieved cleanly
through the extra preprocessing step mentioned above, in line with the
example described there.
@end enumerate
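
As a concrete illustration, the lock block just described might look
roughly like this (a sketch only; all names are hypothetical, and the
expansion would actually be generated by the extra preprocessing
stage):

@example
/* The buffer object is pinned against relocation for the
   duration of the block; `b' is scoped to the block.  */
lock (struct buffer *b = XBUFFER (obj))
  @{
    /* Safe to work on the raw structure through `b' here.  */
  @}

/* ...which the preprocessor might expand to roughly:  */
@{
  struct buffer *b = XBUFFER (obj);
  inhibit_relocation (obj);   /* hypothetical */
  /* ... body ... */
  allow_relocation (obj);     /* hypothetical */
@}
@end example

@subsubheading Actually Replacing the Engine
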
Once we've done all of the work mentioned in the previous steps (and
admittedly, this is quite a lot of work), we should have an XEmacs that
still uses what is essentially the old and previously existing Lisp
engine, but which is ready to have its Lisp engine replaced.  The
replacement might proceed as follows:

@enumerate
@item
Identify any further changes that need to be made to the engine
interface that we have defined as a result of the previous steps, so
that the features and idiosyncrasies of the various Lisp engines that
we examine can be properly supported.

@item
Pick a Lisp engine and write an interface layer that sits on top of
this Lisp engine and makes it adhere to what I'll now call the XEmacs
Lisp engine interface.

@item
Strongly consider creating, if we haven't already done so, a test suite
that can test the XEmacs Lisp engine interface when used with a
stand-alone Lisp engine.

@item
Test the hell out of the Lisp engine that we've chosen when combined
with its XEmacs Lisp engine interface layer as a stand-alone program.

@item
Now finally attach this stand-alone program to XEmacs itself.  Debug
and fix any further problems that ensue (and there inevitably will be
such problems), updating the test suite as we go along so that if it
were run again against the old, buggy interfaced Lisp engine, it would
note the bug.
@end enumerate

@node Future Work -- Startup File Modification by Packages, , Future Work -- Lisp Engine Replacement -- Implementation, Future Work -- Lisp Engine Replacement
@subsection Future Work -- Startup File Modification by Packages
@cindex future work, startup file modification by packages
@cindex startup file modification by packages, future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

OK, we need to create a design document for all of this, including:

PRINCIPLE #1: Whenever you have auto-generated stuff, @strong{CLEARLY}
indicate this in comments around the stuff.  These comments get
searched for, and used to locate the existing generated stuff to
replace.  Custom currently doesn't do this.

PRINCIPLE #2: Currently, lots of functions want to add code to the
.emacs.  (e.g. I get prompted for my mail address from
@code{add-change-log-entry}, and then prompted if I want to make this
permanent.)  There needs to be a Lisp API for working with arbitrary
code to be added to a user's startup.  This API hides all the details
of which file to put the fragment in, where in it, how to mark it with
magical comments of the right kind so that previous fragments can be
replaced, etc.

PRINCIPLE #3: @strong{ALL} generated stuff should be loaded before any
user-written init stuff.  This way the user can override the generated
settings.  Although in the case of customize, it may work when the
custom stuff is at the end of the init file, it surely won't work for
arbitrary code fragments (which typically do @code{setq} or the like).

PRINCIPLE #4: As much as possible, generated stuff should be placed in
separate files from non-generated stuff.  Otherwise it's inevitable
that some corruption is going to result.

PRINCIPLE #5: Packages are encouraged, as much as possible, to work
within the customize model and store all their customizations there.
However, if they really need to have their own init files, these files
should be placed in .xemacs/, given normal names
(e.g. @file{saved-abbrevs.el}, not @file{.abbrevs}), and there should
be some magic comment at the top of the file that causes it to get
automatically loaded while loading a user's init file.

(Alternatively, the above-named API could specify a function that lets
a package specify that it wants such-and-such file loaded from the init
file, and have the specifics of this handled correctly.)

OVERARCHING GOAL: The overarching goal is to provide a unified
mechanism for packages to store state and setting information about the
user and what they were doing when XEmacs exited, so that the same or a
similar environment can be automatically set up the next time.  In
general, we are working more and more towards being a truly GUI app
where users' settings are easy to change and get remembered correctly
and consistently from one session to the next, rather than requiring
nasty hacking in elisp.

Hrvoje, do you have any interest in this?  How about you, Martin?  This
seems like it might be up your alley.  This stuff has been ad-hocked
since kingdom come, and it's high time that we make this work properly
so that it can be relied upon, and a lot of things can "just work".

@node Future Work Discussion, Old Future Work, Future Work, Top
@chapter Future Work Discussion
@cindex future work, discussion
@cindex discussion, future work

This chapter includes (mostly) email discussions about particular
design issues, edited to include only relevant and useful stuff.
Ideally over time these could be condensed down to a single design
document to go into the normal Future Work section.

@menu
* Discussion -- Garbage Collection::
* Discussion -- Glyphs::
* Discussion -- Dialog Boxes::
* Discussion -- Multilingual Issues::
* Discussion -- Instantiators and Generic Property Accessors::
* Discussion -- Switching to C++::
* Discussion -- Windows External Widget::
* Discussion -- Packages::
* Discussion -- Distribution Layout::
@end menu

@node Discussion -- Garbage Collection, Discussion -- Glyphs, Future Work Discussion, Future Work Discussion
@section Discussion -- Garbage Collection
@cindex discussion, garbage collection
@cindex garbage collection, discussion

@menu
* Discussion -- Pure Space::
* Discussion -- Hashtable-Based Marking and Cleanup::
* Discussion -- The Anti-Cons::
@end menu

@node Discussion -- Pure Space, Discussion -- Hashtable-Based Marking and Cleanup, Discussion -- Garbage Collection, Discussion -- Garbage Collection
@subsection Discussion -- Pure Space
@cindex discussion, pure space
@cindex pure space, discussion

On Tue, Oct 12, 1999 at 03:36:59AM -0700, Ben Wing wrote:

So what am I missing here?

In response, Olivier Galibert wrote:

Two things:

@enumerate
@item
The purespace is gone.

I mean absolutely, completely and utterly removed.  Fpurecopy is a
no-op now (and has been for some time).  Readonly objects are gone too.
Having fewer checks to do in Fsetcar, Fsetcdr, Faset and some others is
probably a good thing, speedwise.  I removed it some time ago because
it does not make sense, when using a portable dumper, to copy data into
a special area of the memory at dump time, and I wanted to be sure that
suppressing the copying from Fpurecopy wouldn't break things.

Now, we want to get the post-dumping data sharing back, of course.  In
today's systems, it is quite easy: you just have to map the file
MAP_PRIVATE and avoid writing to the subset of pages you want to keep
shared.  Copy-on-write does the job for you.  It has the nice side
effect of completely avoiding bus errors due to trying to write to
readonly memory zones.  Avoiding writing to the "pure" objects
themselves is already done, of course.
Had lisp code written to the purecopied parts of the dumped data, it
would have exploded long ago.  So there is nothing to do in this area.

So the only remaining thing is the markbit.  Two possible strategies:

@itemize @bullet
@item
have Fpurecopy somehow mark the lrecords it would have copied in the
good old times.  Post-dump, use this mark as an "always marked, don't
touch, don't look into, don't free" flag, the same way CHECK_PURE was
used.

@item
move the markbit outside of the lrecord.
@end itemize

The second solution is more appealing to me for a bunch of reasons:

@itemize @bullet
@item
more things are shared than only what is purecopied (not yet used
functions come to mind)

@item
no more "the only references to this non-purecopied object are from
purecopied objects, XEmacs will self-destruct in ten seconds" kind of
bugs.

@item
removing flags goes the right way towards implementing Jan's allocator
ideas.

@item
it becomes probably easier to experiment with the GC code
@end itemize

@item
Finding all the dumped objects in order to unmark them sucks.

Not having to rebuild a list of all the dumped objects in order to find
them all and ensure that all are unmarked simplifies things for me.
Errr, ok, now that I really think of it, I can rebuild this list
easily, in fact.  And I'm probably going to have to manage it, since I
feel like the lack of calls to the finalizers for the dumped objects is
going to someday turn over and bite me in the face.  But anyways, it
makes my life easier for now.  So no, it's not a _necessity_.  But it
helps.  And the automatic sharing of all objects until you write to
them explicitly is, I think, really cool.
@end enumerate

@node Discussion -- Hashtable-Based Marking and Cleanup, Discussion -- The Anti-Cons, Discussion -- Pure Space, Discussion -- Garbage Collection
@subsection Discussion -- Hashtable-Based Marking and Cleanup
@cindex discussion, hashtable-based marking and cleanup
@cindex hashtable-based marking and cleanup, discussion

On 10/12/1999 5:49 PM Ben Wing wrote:

OK, I can see the advantages.  But:

@enumerate
@item
There will be an inevitable loss of speed using a large hashtable.  If
that loss is large, I say that it's just not worth it.  There are
things that are so much more important than futzing around with the
garbage collector (e.g. fixing the god damn user interface), things
which if not fixed will sooner or later cause XEmacs to die entirely.
If we are causing a major slowdown in the name of some not-so-important
work that may or may not get done, we shouldn't do it.  (On the other
hand, if the slowdown is negligible, I have no problems with this.)

@item
I think you should @strong{expand} the concept of read-only objects so
that @strong{any} object (especially strings and cons cells) can get
marked read-only by the C code if it wants.  (Perhaps you could use the
now-unused mark bit to hold a read-only flag.)  This is important
because it allows C code to directly return internal lists (e.g. from
the specifiers and various object property lists) without having to do
a copy, as is now done (and similarly, potentially to directly accept
lists from a Lisp call without copying them for internal use, if the
Lisp caller is made aware that the list might become read-only) -- if
the copy weren't done and some piece of Lisp code went and modified the
list, XEmacs might very well crash.  Thus, this read-only flag would be
a huge efficiency gain in terms of the garbage collection overhead
saved as well as the speed of copying a large list.
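
Such a check might look roughly like the following (a sketch only,
written as a plain function for clarity where the real definition would
use @code{DEFUN}; the flag macro and error routine are hypothetical,
not actual XEmacs names):

@example
Lisp_Object
Fsetcar (Lisp_Object cons, Lisp_Object newcar)
@{
  CHECK_CONS (cons);
  if (OBJECT_READ_ONLY_P (cons))      /* hypothetical flag test */
    signal_read_only_error (cons);    /* hypothetical routine */
  XCAR (cons) = newcar;
  return newcar;
@}
@end example
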
The extra checks in @code{Fsetcar()}, etc. for this that you mention
are in fact negligible in their speed overhead -- one or two
instructions -- and these functions are not used all that commonly,
either.  With the changes I have proposed in Architecting XEmacs, the
case of returning an internal list will become more and more common, as
the power of the user interface is greatly increased and along with it
come lots and lots of lists of info that need to be retrievable from
Lisp.
@end enumerate

BTW there is a wonderful book all about garbage collection by Jones and
Lins.  Ever seen it?

@example
http://www.amazon.com/exec/obidos/ASIN/0471941484/qid=939775572/sr=1-1/002-3092633-2509405
@end example

@node Discussion -- The Anti-Cons, , Discussion -- Hashtable-Based Marking and Cleanup, Discussion -- Garbage Collection
@subsection Discussion -- The Anti-Cons
@cindex discussion, the anti-cons
@cindex the anti-cons, discussion

From: "Ben Wing" <ben@@666.com>
Date: Tue, 14 May 2002 06:48:09 -0700

i was thinking about the proliferating types of weak hash tables --
e.g. now we have "key-car-value weak" hash tables due to a need in the
glyphs code.  i realized there should be a general solution that lets
you control exactly how the weakness of such hash tables works.  and,
assuming we implement a simple "reference" type -- a simple container
whose reference to its contents is weak, and whose contents thus get
converted to nil (with a flag set on the reference) when the object is
collected -- it would be useful for more precisely controlling the
reference, too.

it's called an "anti-cons".  it behaves somewhat like a cons in that it
boxes two items, but its marking properties are very different -- in
fact, backwards.  normally, a cons, if marked, marks its children.  in
this case, if the children of an anti-cons are marked, it marks itself!
you'd need a few different kinds of anti-cons -- probably the
following:

@example
and        [marks itself if both children marked]
or         [...]
left       [marks itself if left is marked, and then marks the right]
right      [...]
not-left
not-right
@end example

by putting such an object inside of a weak reference -- e.g. in a weak
hash table -- we can set up a tree of arbitrary complexity which
implements any boolean formula of markedness over any number of
objects.  this would easily handle key-car, and key-cadr, and
key-car-or-cdr, and key-((caar or cadr) and cdr) etc. etc.

implementing this in the current xemacs framework is mostly trivial.
michael, would such an object get in the way of your new gc?

From: sperber@@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor])
Date: Tue, 14 May 2002 16:04:01 +0200

You might want to look at

http://research.microsoft.com/Users/simonpj/Papers/weak.htm

for a pretty comprehensive survey of what you could want in terms of
weakness.  Its weak pointers are very similar to your anti-cons.
However, there are some problems in doing the same in a Lisp setting,
mainly because of symbols.  I intend to elaborate on this next week;
this week is full, unfortunately.

Ben> implementing this in the current xemacs framework is mostly
Ben> trivial.

Ben> michael, would such an object get in the way of your new gc?

Well, our first commit will be an implementation of vanilla weak boxes
(ready within the next few days, I hope), and we'll then try to replace
most other instances of weakness with uses of those.  We'll then try to
find a more general solution for the rest.  (Richard Reingruber has
already done a comprehensive survey of the trouble spots.)  Can you
wait until next week?
I'll try to come up with a battle plan then.

From: sperber@@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor])
Date: Tue, 28 May 2002 16:14:20 +0200

We've now started implementing ephemerons as a building block for the
more involved weakness-involving data structures.  The relevant
reference is:

Barry Hayes.  Ephemerons: A New Finalization Mechanism.  OOPSLA 1997,
176--183.

The idea is this: an ephemeron consists of a key and a value.  Through
the ephemeron, the key is not reachable.  The value is only reachable
if both the ephemeron is reachable and the key is reachable.  If the
ephemeron is reachable and the key becomes unreachable, the value slot
of the ephemeron will be tombstoned, i.e. overwritten with NIL or
something.

This allows implementing, AFAICS, the other data structures involving
weakness, such as weak hash tables and their various mutants.

We're also planning to come up with a more comprehensive solution for
finalization, but some design snags remain to be worked out.

@node Discussion -- Glyphs, Discussion -- Dialog Boxes, Discussion -- Garbage Collection, Future Work Discussion
@section Discussion -- Glyphs
@cindex discussion, glyphs
@cindex glyphs, discussion

Some comments (not always pretty!) by Ben:

March 20, 2000

Andy, I use the tab widgets but I've been having lots of problems.

1] Sometimes clicking on them does nothing.

2] There's a design flaw: I frequently use M-C-l to switch to the
previous buffer.  If I use this in conjunction with the tabs, things
get all screwed up because selecting a buffer with the tab does not
bring it to the front of the buffer list, like it should.  It looks
like you're doing this to avoid having the order of the tabs change,
but this is wrong: If you don't reorder the buffer list, everything
else gets screwed up.  If you want the order of the tabs not to change,
you need to decouple this order from the buffer list order.

March 23, 2000

I'm very confused.  The SIGIO timer is used @strong{only} for C-g.  It
has nothing to do with any other events.

(sit-for 0) ought to (a) cause all pending non-command events to get
executed, and (b) do redisplay.  However, sit-for gets preempted by
input coming in.  What about (sit-for 0.1)?

I suppose a solution along the lines of dispatch-non-command-events
might be OK if you've tried everything else and it doesn't work, but
i'm leery of introducing new Lisp functions to deal with specific
problems.  Pretty soon we end up with a whole bevy of such ill-defined
functions, like we already have.

I think instead, you should introduce the following primitive:

@example
(wait-for-event redisplay &rest event-specs)
@end example

Waits for any one of the specified event specifications to happen.
Returns something about what happened.

REDISPLAY controls the behavior of redisplay during waiting.  Something
like

@itemize @bullet
@item
nil (never redisplay),

@item
t (redisplay when it seems appropriate), etc.
@end itemize

EVENT-SPECS could be

@example
t -- drain all non-user events, and then return
any-process -- wait till input or state change on any process
process -- wait till input or state change on process
time -- wait till such-and-such time has elapsed
'user -- wait till user event has happened
'(user predicate) -- wait till user event matching the predicate has happened
'event -- wait till any event has happened
'(event predicate) -- wait till event matching the predicate has happened
@end example

The existing functions @code{next-event}, @code{next-command-event},
@code{accept-process-output}, @code{sit-for}, @code{sleep-for}, etc.
could all be written in terms of this new command.

You could use this command inside of your glyph code to ensure that the
events that need to get processed in order for widget updates to happen
actually get processed.  But you said something about needing a magic
event to invoke redisplay?  Why is that?

April 2, 2000

the internal distinction between "widget" and "layout" is bogus.  there
exist widgets that do drawing and do layout of their children,
e.g. group-box widgets and proper tab widgets.  the only sensible
distinction is between widgets with children and those without
children.

April 5, 2000

andy, i'm not sure i really believe that you need to cycle the event
code to get widgets to redisplay, but in any case you should

@enumerate
@item
hide the logic to do this in the c code; the lisp code should do
nothing other than call (redisplay widget)

@item
make sure your event-cycling code processes @strong{NO} events at all.
this includes non-user events.  queue the events instead.
@end enumerate

in other words, dispatch-non-command-events must go, and i am proposing
a general function (redisplay OBJECT) to replace the existing ad-hoc
functions.

April 6, 2000

the tab widget code should simply be able to create a whole lot of tabs
without regard to the size of the gutter, and the surrounding layout
widget (please please make layouts be proper widgets!) should
automatically map and unmap them as necessary, to fill up the available
space.  perhaps this already works and what you're doing is just for
optimization?  but i get the feeling this is not the case.

April 6, 2000

the function make-gutter-only-dialog-frame is bogus.  the use of the
gutter here to hold widgets is an implementation detail and should not
be exposed in the interface.  similarly, make-search-dialog should not
have to do all the futzing that it does.  creating the frame unmapped,
creating an extent and messing with the gutter: all this stuff should
be hidden.  you should have a simple function make-dialog-frame that
takes a dialog specification, and that's all you need to do.

also, these dialog boxes, and this function make-dialog-frame, should

@enumerate
@item
be in @file{dialog.el}, not @file{gutter-items.el}.

@item
when possible, be placed in the interactive spec of standard lisp
functions rather than accessed directly from @file{menubar-items.el}

@item
be wrapped in calls to should-use-dialog-box-p, so the user has control
over when dialog boxes appear.
@end enumerate

April 7, 2000

hmmm ... in that case, the whitespace absolutely needs to be specified
as properties of the layout widget (e.g. :border-width and
:border-height), rather than setting an overall size.  you have no idea
what the correct size should be if the user changes font size or uses
translations in a different language.  Your modus operandi should be
"hardcoded pixel sizes are @strong{always} bad."
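
(To illustrate the model being pushed in these notes -- layout driven
by geometry queries rather than hardcoded pixel sizes -- here is a
rough C sketch, with entirely hypothetical names:)

@example
/* Lay out children left to right; each child reports its own
   preferred size, computed from its font and contents, and
   children that don't fit are simply left unmapped.  */
static void
layout_children (Lisp_Object *children, int nchildren, int avail_width)
@{
  int x = 0, i;

  for (i = 0; i < nchildren; i++)
    @{
      int w, h;
      query_geometry (children[i], &w, &h);   /* hypothetical */
      if (x + w > avail_width)
        break;              /* this child and the rest stay unmapped */
      map_child_at (children[i], x, 0);       /* hypothetical */
      x += w;
    @}
@}
@end example
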
April 7, 2000

you mean the number of tabs adjusts, or the size of each tab adjusts
(by making the font smaller or something)?  if the size of a single tab
is not related to the total space the tabs can fit into, then it should
be possible to simply specify as many tabs as exist for buffers, and
have the layout manager decide how many can fit into the available
space.  this does @strong{not} mean the layout manager will resize the
tabs, because query-geometry on the tabs should find out that the tabs
don't want to be any size other than they are.

the point here is that you should not @strong{have} to worry about
pixel heights and widths @strong{anywhere} in Lisp-level code.  The
layout managers should take care of everything for you.  The only
exceptions may be in some text fields, which will be blank by default
and you want to specify a maximum width (which should be done in 'n'
sizes, not in pixels!).  i won't stop complaining until i see nearly
every one of those pixel-width and pixel-height parameters gone, and
the remaining ones there for a very, very good reason.

April 7, 2000

Andy Piper wrote:

@example
> At 03:51 PM 4/6/00 -0700, Ben Wing wrote:
> >[the function make-gutter-only-dialog-frame is bogus]
>
> The problem is that some of the callbacks and such need access to the
> @strong{created} frame, so you end up in a catch 22 unless you do what I've done.
@end example

[Ben proposes other ways to avoid exposing all the guts, as in
@code{make-gutter-only-dialog-frame}:]

@enumerate
@item
Instead of passing in the actual glyph spec or glyph, pass in a
function of two args (the dialog frame and its parents), which, when
called, creates and returns the appropriate glyph.

@item
[Better] Provide a way for callbacks to determine where they were
invoked.  This is much more general and is what you should really do.
For example, have the code that calls the callbacks bind some global
variables such as widget-callback-current-glyph and
widget-callback-current-channel, which contain the glyph whose callback
is being invoked, and the window or frame of the glyph (depending on
where the glyph is) where the invocation actually happened.  That way,
the callbacks can easily figure out the dialog box and its parent, and
not have to worry about embedding it at creation time.
@end enumerate

April 15, 2000

I don't understand when you say "the various types of callback".  Are
you using the callback for various different purposes?  Your widget
callbacks should work just like any other callback: they take two
arguments, one indicating the object to which the callback was attached
(an image instance, i think), and the event that caused the callback to
be invoked.

April 17, 2000

I am completely vetoing widget-callback-current-channel.  How about you
create a new keyword, :new-callback, that is a function of two args,
like i specified before?  btw if you really are calling your callback
using call-interactively, why don't you declare a function with
(interactive "e") and then call event-channel on the resulting event?
that should get you the same result as widget-callback-current-channel.
the problem with this and everything you've proposed is that there's no
way, of course, to get at the actual widget that you were invoked from.
would you propose adding widget-callback-current-widget?
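
The two-argument callback convention being insisted on here would be
invoked from the C side along roughly these lines (a sketch; the
accessor name is hypothetical, while @code{call2} and @code{NILP} are
actual XEmacs facilities):

@example
/* Call a widget's callback with the object it is attached to and
   the event that triggered it.  */
static void
invoke_widget_callback (Lisp_Object image_instance, Lisp_Object event)
@{
  Lisp_Object fn = widget_callback_of (image_instance); /* hypothetical */

  if (!NILP (fn))
    call2 (fn, image_instance, event);
@}
@end example
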
@node Discussion -- Dialog Boxes, Discussion -- Multilingual Issues, Discussion -- Glyphs, Future Work Discussion
@section Discussion -- Dialog Boxes
@cindex discussion, dialog boxes
@cindex dialog boxes, discussion

@example
From: Ben Wing <ben@@666.com>
10/7/1999 5:57 PM
Subject: Re: Animated gif patch (2)
To: Andy Piper <andy@@xemacs.org>
CC: xemacs-review@@xemacs.org, xemacs-beta@@xemacs.org

The distinction between layouts and widgets makes no sense, so you
should combine the different data required.  Consider a grouping
widget.  Is this a layout or a widget?  It draws, like a widget, but
has children, like a layout.  Same for a tab widget, properly
implemented.  It draws, handles input, has children, and makes choices
about how to lay them out.

ben

From: Ben Wing <ben@@666.com>
9/7/1999 8:50 PM
Subject: Re: Layouts done
To: Andy Piper <andyp@@beasys.com>

this sounds great!  where can i see the code?

as for user-defined layouts, you must certainly have some sort of
abstraction layer for layouts, with DEFINE_LAYOUT_TYPE or something
similar just like device types and such.  If not, you should certainly
make one ... it would have methods such as query-geometry and
do-layout.  It should be easy to create a user-defined layout if you
have such an abstraction.

with a user-defined layout, complex built-in layouts such as grid
should not be necessary because it's so easy to write snippets of lisp.

as for the "redisplay too much" problem, perhaps you could put a dirty
flag in each glyph indicating whether it needs to be redisplayed,
recalculated, etc.?

Andy Piper wrote:

> You may want to check them out.  I haven't done the user-defined layout
> callback - I'm not sure what sort of API this could have.  Keywords I've done:
>
> :orientation - vertical or horizontal
> :justify - left, center or right
> :border - etch-in, etch-out, bevel-in, bevel-out or text (which gives you
> etch-in with a title)
>
> You can embed any glyph type in a layout.
>
> There is probably room for improvements for justify to do grid-type layouts
> as per java.
>
> The only annoying thing is that I've hacked up font-lock support to do a
> progress gauge in the gutter area.  I've used a layout to set things out
> correctly.  The problem is if you change one of the sub-widgets, the whole
> layout gets redisplayed because it is treated as a single glyph by redisplay.
>
> Oh, and I've done line based scrolling so that glyphs scroll off the page
> in units of the average display line height rather than the whole line at
> once.  This could easily be converted to pixel scrolling but would be very
> slow I fear.
>
> andy
> --------------------------------------------------------------
> Dr Andy Piper
> Senior Consultant Architect, BEA Systems Ltd

From: Ben Wing <ben@@666.com>
8/10/1999 11:11 PM
Subject: Re: Widgets
To: Andy Piper <andy@@xemacs.org>

I think you might have misinterpreted what i meant.  I meant to say
that XEmacs should implement the @strong{concept} of a hierarchy of
nested child "widgets" or "gui items" or whatever we want to call them
-- this includes container "widgets" such as grouping widgets (which
draw a border around the children, like in Windows), tab widgets,
simple layout widgets (invisible, but lay out their children
appropriately), etc., plus leaf "widgets" (buttons, sliders, etc., also
standard Emacs windows).  The layout calculations for these widgets
would be handled entirely by XEmacs in a window-system-independent way.

There is no need to create a corresponding hierarchy of window-system
widgets/controls/whatever if it's not required, and certainly no need
to try to use the window-system-supplied geometry management routines.
It's absolutely necessary to support this nesting concept in XEmacs,
however, or it's impossible to have easily-designable dialog boxes.

On the other hand, I think it @strong{is} required to create much of
this hierarchy within the actual window system, at the very least for
non-invisible container widgets (tab, grouping, etc.), otherwise we
will have very bogus, non-native-looking containers like your current
tab-widget implementation.  It's critical for XEmacs to be able to
create dialog boxes in Windows or Motif that look just like those in
any other standard application.  Otherwise people will continue to
think that XEmacs is a backwards-looking, badly implemented piece of
software, which in many ways it is, particularly in regard to its user
interface.

Perhaps we should talk on the phone?  This typing is quite hard for me
still.  What hours are you at work?  My hours are approx. 2pm - 2am
Pacific time (GMT - 7 hours currently).

ben

From: Ben Wing <ben@@666.com>
7/21/1999 2:44 AM
Subject: Re: Tabs 'n widgets screenshot
To: Andy Piper <andy@@xemacs.org>
CC: xemacs-beta@@xemacs.org, wmperry@@aventail.com

This is real cool, but looking at this, it's clear that it doesn't look
the way tab widgets are supposed to work.  In particular, of course,
they should have the proper borders around the stuff displayed.  I've
attached a screen shot of a typical Windows dialog box with a tab
widget in it.

The problem lies with this "expanded gutter" concept.  Tabs are
@strong{NOT} extra graphical junk placed in the gutters of a buffer but
are GUI objects with @strong{children} inside of them.  This is the
right way to do things, and you would need no extra gutter
functionality at all for this.  You just need to implement the concept
of GUI objects containing other GUI objects within them.  One such GUI
object needs to be an "Emacs-text" GUI object, which is an Emacs window
and contains a buffer within it.  At this level, you need not be
concerned with the complexities of geometry layout.

The only change that needs to be made in the overall strategy of
frames, windows, etc. is that windows need not be exactly contiguous
and tiled, as long as they are contained within a frame.  Or more
specifically: Given that you could always split a window contained
inside a GUI object, we just need to expand things so that each frame
has @strong{multiple} hierarchies of windows in it, rather than just
one.  A hierarchy of windows can nest inside of another window --
e.g. I put a tab widget or a text widget inside of a buffer.  This
should be easy to implement -- just change things so there are multiple
hierarchies of windows where there is now one, each (except the
top-level one) being rooted inside some other window.

Anyone willing to implement this?  Andy?

From: Ben Wing <ben@@666.com>
6/30/1999 3:30 PM
Subject: Re: Focus Help!
To: Andy Piper <andy@@xemacs.org>
CC: Ben Wing <ben@@xemacs.org>, martin@@xemacs.org, andyp@@beasys.com

It sounds like you're doing very good work.  It also sounds like the
approach you have followed is the correct one.  Now, it seems like
there isn't really that much work left to get dialog boxes working.
What you really just need to do is implement container widgets, that is
to say, subwindows that can contain other subwindows.  For example, the
tab widget works this way.
(It sounds like you have already implemented tab widgets, so I don't
quite see how you've done this without the concept of container
widgets.)  So you might just try adding a framework for container
widgets and then implementing very simple container widgets.

The basic container widgets are:

1. A vertical-layout widget, which draws nothing itself and lays out
its children one above the next.

2. A horizontal-layout widget, which draws nothing itself and lays out
its children side-to-side.

3. A box (or "grouping") widget, which draws a rectangle around its
single child and optionally draws some text on the top or bottom line
of the rectangle.

4. A tab widget, which displays a series of tabs horizontally at the
top of its area, and then below it places one of its children,
corresponding to the selected tab.

5. A user widget, which draws nothing itself and does no layout at all
on its children, except that it has a "layout callback" property, a
Lisp function, so that the programmer can control the layout.

The framework is as follows:

1. Every widget has at least the following properties:

a) a size, whose value can be "unspecified", which might be implemented
using the value -1.  The default value should be "unspecified".

b) whether it's mapped, i.e. whether it will be displayed.  (Some
container widgets, such as the tab widget, set the mapped property
themselves on their children.  Others, such as the vertical and
horizontal layout widgets, don't change this property but pay attention
to it, and ignore completely all children marked as unmapped.)  The
default value should be "true".

c) whether its size can be changed by another widget's layout routine.
The default value should be "true".

d) a layout procedure, which (potentially at least) determines the size
of the widget as well as the position, size and mappedness of its child
widgets.  The layout procedure is inherent in the widget and is not an
external property of the widget (except in the case of the "user
widget"): it is instead more like the redisplay callback that each
widget has.

2. Every container widget contains a property which is a list of child
widgets.

3. Every child widget contains the following properties:

a) a position indicating where the child is located relative to the top
left corner of its parent.  The position's value can be "unspecified",
which might be implemented using the value -1.  The default value
should be "unspecified".

b) whether its position can be changed by another widget's layout
routine.  The default value should be "true".

4. All of the properties just listed (except possibly the layout
procedure) can be modified directly by the programmer, and there are no
proscriptions against doing so.  However, if the programmer wants to
resize, reposition, map or unmap a widget in such a way that the layout
of all the other widgets in the tree changes appropriately, he should
use a special function to change the property, as described below.

The redisplay mechanism pays attention to the position, size, and
mappedness properties and to the hierarchy of widgets, mapping,
resizing and repositioning the corresponding subwindows (the "real
representation" of the widgets) as necessary.  It also pays attention
to the hierarchy of the widgets, making sure that container subwindows
get drawn before their child subwindows.  When it encounters widgets
with an unspecified size, it should not draw them, and should issue a
warning.
When it encounters widgets with an unspecified position, it should draw
them at position (0, 0) and should issue a warning.

The above framework should be fairly simple to implement and is
basically universal across all high-level windowing system toolkits.
The stickiness comes with what procedures you follow for getting the
layout done.

Andy, I understand that implementing this may seem like a daunting
task.  Therefore, I propose that at first you implement the above
framework but don't implement any of the layout procedures, or any of
the functions that call them: just make them stubs that do nothing.
This way, the Lisp programmer can still create any dialog boxes he
wants, he just has to set the sizes and positions of all the widgets
explicitly, and then recompute them whenever the widget tree is resized
(once you get around to allowing this).

I have a lot more to write about exactly how the layout procedures
work, but I'll send that to you later once you're ready.

You should also think about making a way to have widget trees as
top-level windows rather than just glyphs in a buffer.  There's already
the concept of "popup" frames.  You could provide an easy way to create
a popup frame with no menu, toolbars, scrollbars, modeline or
minibuffer, and put a single glyph in the displayed buffer that takes
up the whole Emacs window.

Ben

March 20, 2000

You wrote to me awhile ago about this and asked about documentation,
and I dictated a response but never got it sent, so here it is:

I don't think there's any more documentation on how things work under
Xt but it should be clear.  The EmacsFrame widget is the widget
corresponding to the X window that Emacs draws into, and there is a
handler for expose events called from Xt which arranges for the
invalidated areas to get redrawn.  I think this used to happen as part
of the handler itself but now it is delayed until the next call to
redisplay.

However, one thing that you absolutely must not do is remove the Xt
support.  This would be an incredibly unfriendly thing to do as it
would prevent people from using any widget set other than Qt or GTK.
Keep in mind that people run XEmacs on all sorts of different versions
of X in Unix, and Xt is the standard and the only toolkit that probably
exists on all of these systems.  Pardon me if I've misunderstood your
intentions w.r.t. this.

As for how you would implement GTK support, it will not be very hard to
convert redisplay to draw into a GTK window instead of an Xt window.
In fact redisplay basically doesn't know about Xt at all, except in the
portion that handles updating menubars and scrollbars and stuff that's
directly related to Xt.  What you'd probably want to do is create a new
set of event routines to replace the ones in event-Xt.c.  On the
display side you could conceivably create a new device type but you
probably wouldn't want to do that because it would be an externally
visible change at the Lisp level.  You might simply want to put a flag
on each frame indicating what sort of toolkit the frame was created
under and put conditions in the redisplay code and the code to update
toolbars and menubars and so forth to test this flag and do the
appropriate thing.

April 12, 2000

This is way cool, buuuuutttttttt ............. what we @strong{really}
need is the GUI interface on top of it.  I've taken a shot at it with
generic-print-buffer (print-buffer is taken by lpr, which is such a
total mess that it needs to be trashed; or at least, the generic stuff
in this package needs to be taken out and properly genericized).  For
the moment, generic-print-buffer just does something like what Kirill's
been posting if we're running windows, and uses lpr otherwise.
For the moment, generic-print-buffer just does something like what Kirill's been posting if we're running windows, and uses lpr otherwards. However, what we absofuckinglutely need is a Lisp interface onto @code{EnumPrinters()} so that we can get the list of printers and have a nice menu listing the available printers, and you can check the one you want. People in the Windows world don't normally even know the names of their local printers! Kirill, given what I've done in @file{simple.el} and @file{menubar-items.el}, do you think you could add the @code{EnumPrinters()} support and fix up the GUI? If you don't feel comfortable with the GUI, at least do the @code{EnumPrinters()}. But ... Kirill, I tried your formula for printing and nothing happened. Perhaps I didn't call redisplay-frame or something? You need to fix this up and make it work for multi-page documents. (Again, this is in generic-print-buffer.) Nothing special, it just needs to fucking work! There are zillions and zillions of postings every day on xemacs-nt about how to get printing working, and none seem to refer to the built-in support. ben April 19, 2000 Kirill 'Big K' Katsnelson wrote: > Some time ago, Ben Wing wrote... > >kirill, the interface i created is more general, like this: > > [snip] > > >Unfortunately I haven't implemented much of this; just some of the file > >dialog box. but i think > >this is better than creating new mswindows-specific primitives. if you > >are interested in working on > >this, i'll send you the code i have. > > Sure. Can you just commit it for my starting point? > > >also, the dialogs shouldn't have anything directly to do with the printer > >device. all they should > >do is return a set of values. it's the caller's responsibility to > >interpret them and set device > >properties accordingly. this way, there's a complete separation between > >the underlying > >functionality and the gui. > > Unfortunately. I thought about doing it this way, but we then lose a lot of > printer-specific setup in this case. The DEVMODE structure contains two > parts: printer independent, as defined by SDK typedef DEVMODE, and > some trailing bytes, of unknown structure, used by a driver. The driver > only returns the extra length it wants. Such options as PCL ReT resolution > enhancement options or PostScript negative output are not available > through the standard part of the devmode structure, and stored in the > driver part (printer dialogs are driver-specific). > > So we have total of three options: > - Not to implement options beyond standard DEVMODE > - Make DEVMODE a Lisp object. > - Hide DEVMODE inside the device object. > > First case looks cheesy. Letting DEVMODE fall off the printer is no good > either, since one needs both the device and the devmode to edit the > devmode, and they must match. I am still convinced that the devmode and > the printer should not be separated. hmm, i see ... this completely breaks abstraction though. it fails in various scenarios, e.g. a program wants to initialize the dialog box with certain non-driver-specific properties, without caring about the particular printer. i think you should create a new print-properties object that encapsulates all printer properties (which can be changed using get/put), including the printer name, and contains a DEVMODE in it. if the printer name gets changed, the DEVMODE might change too, but the print-properties object itself stays the same. you pass this object as a parameter to the dialog box, and it gets changed accordingly. 
you can call something like set-device-print-properties to stick
everything in this structure into the device.  (you could imagine a
case where someone wanted to keep multiple print configurations
around ...)

>
>
> Big K

--
Ben
@end example

@node Discussion -- Multilingual Issues, Discussion -- Instantiators and Generic Property Accessors, Discussion -- Dialog Boxes, Future Work Discussion
@section Discussion -- Multilingual Issues
@cindex discussion, multilingual issues
@cindex multilingual issues, discussion

@example
4/10/2000 4:13 AM

BTW I am planning on adding some more powerful font-mapping
capabilities to XEmacs (i.e. how do we map particular characters to the
proper fonts that can display them, and how do we map the character's
codes to the indices into the font).  These will replace the hackish
charset-registry/charset-ccl-program stuff we currently have, and be
[a] much more powerful, [b] designed in a window-system-independent
way, [c] work with specifiers so you can control the mapping of
individual buffers, and [d] work on a character rather than charset
level, to correctly handle Unicode.  One possible usage would be to
declare that all latin1 in a particular buffer be displayed with latin2
fonts; I bet Hrvoje would really appreciate that.

---------------------------------------------------------------------------

April 10, 2000

[info from "creation of generic macros for accessing internally
formatted data"]

Hmm, so there I just wrote a detailed design for the macros.  I would
be @strong{THRILLED} and overjoyed if you went ahead and implemented
this mechanism, or parts of it.  I've just finished arranging for a new
transcriptionist, and soon I should be able to send off and get back my
dictation of my (a) exposing streams to lisp, and (b) allowing for
proper lisp-created coding systems, which define their reading,
writing, and detecting methods in lisp.

BTW How's it going wrt your Unicode and decode-priority stuff?

And ... you sent me mail asking what it was you had promised me, and
listed only one thing, which was profiling of vm and certain other
operations you found showed tremendous slowdown with Japanese
characters.  The other main thing I want from you is -- Your
priorities, as an actual Japanese user and XEmacs developer, concerning
what MULE work should be done, how it should be done, in what order,
etc.  I'm sure there's something else, but it's been awhile since I
took my sleeping dose and my brain can barely function anymore.

Just let me know how you're going to proceed with the above macro
changes.

BTW there are some nice Perl scripts written by Martin and fixed by me
to make global-search-and-replace much, much easier.  I've attached
them.  The first one is a shell script that works like

gr foo bar *.[ch]

and replaces foo with bar in all of the files.  For each modified file,
a backup is created in the backup/ directory, which is created as
necessary.  This shell script is a fairly trivial front end onto
global-replace2, which is a perl script that takes one argument (a Perl
expression such as s/foo/bar/g) and a list of files obtained by reading
stdin, and does the same global replacement.  This means that the
regexp syntax used here has to be perl-style rather than standard
emacs/grep style.

ben

---------------------------------------------------------------------

From: Ben Wing <ben@@666.com>
12/23/1999 3:34 AM
Subject: Re: check process state before accessing coding_stream (fix PR#1061)
To: "Stephen J. Turnbull" <turnbull@@sk.tsukuba.ac.jp>
Turnbull" <turnbull@@sk.tsukuba.ac.jp> CC: XEmacs Developers <xemacs-beta@@xemacs.org> Thankfully, nearly all of this horridity you bring up is irrelevant. In XEmacs, "gettext" does not refer to any standard API, but is merely a stand-in for a translation routine (presumably written by us). We may as well call it something else. We define our own concept of "current language". We also allow for a function that needs a different version for each language, which handles all cases where simple translation isn't sufficient, e.g. when you have to pluralize some noun given to you or insert the correct form of the definite article. No weird hacks needed. No interaction problems with other pieces of software. What I wrote "awhile ago" is (unfortunately) not anywhere public currently, but it's on my list to put it on the web site. "There you go again" is usually not true; most of what I quote was indeed put out publicly at some point, but I'll try to be more explicit about this in the future. ben "Stephen J. Turnbull" wrote: > >>>>> "Ben" == Ben Wing <ben@@666.com> writes: > > Ben> "Stephen J. Turnbull" wrote: > > >> What I have in mind is not just gettext-izing everything in the > >> XEmacs core sources. I currently believe that to be > >> unacceptable > > Ben> I don't quite understand. Could you elaborate and give some > Ben> examples? > > Examples? Hmm. > > First, there's the surface of Jan's y-or-n-p example. You have to > coordinate the translation of the message string and the response > prompt. This is handled by y-or-n-p itself (I see that we already do > have gettext for Emacs Lisp, that's nice to know). > > Except that it's not really handled by y-or-n-p. There's no reason to > suppose that somebody writing a Lisp package would necessarily use the > XEmacs domain (in fact, due to the way gettext binds text domains---if > I understand that correctly---we don't want that to be the case, > because it means that every time a Lisp package is updated the whole > XEmacs catalog must also be updated). So which domain gets used for > the message string? > > In the current implementation, it is the domain of y-or-n-p. So > packages with their own domain won't get y-or-n-p prompts correctly > translated. But that means that the package should do its own > translation. But now you're applying gettext to the same string > twice; you just have to pray the that translator upstream doesn't > collide with an English string that's in the XEmacs domain. (The > gettext docs mention the similar problem of English words with > multiple meanings that must map to different words in the target > language; this can be disambiguated by various trickeries in forming > the strings ... but only if you "own" them, which in the multi-domain, > interated gettext example you do not.) AFAICT this means that you > must never pass untranslated strings across public APIs, but this may > or may not be reasonable, and certainly is inconvenient. > > Next, we have to translate the possible answer strings to match the > language being passed by the user. This is presumably OK here, > because it's done by y-or-n-p. But what if y-or-n-p returned a string > rather than a boolean? Then we would need to coordinate the > presentation of the prompt (done by y-or-n-p) and the translation of > the possible answer strings (done by the caller). This can in fact be > done using dgettext with the XEmacs domain, but you must know that > y-or-n-p is in the XEmacs domain. 
> obvious, and it might very well be that sets of related packages might
> have the same domain, so you wouldn't necessarily know which domain is
> appropriate by looking at the requires.
>
> And what happens if one domain does supply translations for a language
> and the other does not?  AFAIK, gettext has no way to find out if this
> is the case.  But you might very well prefer a global fallback to
> English if substantial phrases are drawn from both domains, while you
> might prefer string-by-string fallback if the main text is translated
> and only a few words are left to fallback to English.
>
> Aside from confusing users, this puts a great burden on programmers.
> Programmers need to know about the status of the domains of packages
> they use as well as the XEmacs domain; they need to program
> defensively against the possibility that some package they use will
> become gettext-ized, or the translation projects will be out of synch
> (some teams will do the calling package first, others will do the
> caller package first).
>
> I don't think anybody will use gettext in these circumstances.  At
> least not after they get the first bug report that "XEmacs is stuck in
> an infinite y-or-n-p loop and I can't get out."
>
> Ben> I wrote this awhile ago:
>
> "There you go again."  Not anywhere I could see it!  (At least, it
> doesn't look familiar and grepping the archives doesn't turn it up.)
>
> OK, you win.  Subscribe me to xemacs-review.  Or whatever seems
> appropriate.
>
> --
> University of Tsukuba  Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
> Institute of Policy and Planning Sciences  Tel/fax: +81 (298) 53-5091
> _________________  _________________  _________________  _________________
> What are those straight lines for?  "XEmacs rules."

--
In order to save my hands, I am cutting back on my responses,
especially to XEmacs-related mail.  You _will_ get a response, but
please be patient.  If you need an immediate response and it is not
apparent in your message, please say so.  Thanks for your
understanding.

--------------------------------------------------------------------

From: Ben Wing <ben@@666.com>
12/21/1999 2:22 AM
Subject: Re: check process state before accessing coding_stream (fix PR#1061)
To: "Stephen J. Turnbull" <turnbull@@sk.tsukuba.ac.jp>
CC: XEmacs Developers <xemacs-beta@@xemacs.org>

"Stephen J. Turnbull" wrote:

> >>>>> "Ben" == Ben Wing <ben@@666.com> writes:
>
> Ben> Implementing message translation is not that hard.
>
> What I have in mind is not just gettext-izing everything in the XEmacs
> core sources.  I currently believe that to be unacceptable (see Jan's
> message for the pitfalls in I18N; it's worse for M17N).  I think
> really solving this problem needs a specifier-like fallback mechanism
> (this would solve Jan's example because you could query the
> text-specifier presenting the question for the affirmative and
> negative responses, and the catalog-building mechanism would have
> checks to make sure they were properly set, perhaps a locale
> (language) argument), and gettext is just not sufficient for that.

I don't quite understand.  Could you elaborate and give some examples?

> At a minimum, we need to implement gettext for Lisp packages.
> (Currently, gettext is only implemented for C AFAIK.)  But this could
> potentially cause more trouble than it's worth.
>
> Ben> A lot depends on priority: How important do you think this
> Ben> issue is to your average Japanese/Chinese/etc. user?
>
> Which average Japanese (etc) user?  The English-skilled (relatively)
> programmer in the free software movement, or my not-at-all-competent
> undergrad students who I would love to have using an Emacs?  This is a
> really important ease-of-use issue.
>
> Realistically, for Japanese, it's low priority.  The Japanese team in
> the GNU Translation Project is doing very little AFAIK, so even if the
> capability were there, I doubt the message catalog would soon be done.
>
> But I think that many non-English speakers would find it very
> attractive, and for many languages there are well-organized and
> productive translation teams.  I suspect that if the I18N facility
> were well-designed, many Western European languages would have full
> catalogs within a year (granted, they are the ones where it's least
> needed :-( ).
>
> Personally, I think doing it well is hard, and of little benefit to
> _current_ core XEmacs constituency.  I think doing a good job, with
> catalogs, would be very attractive to many non-English-speaking
> _potential_ users.
>
> Ben> How does it compare to some of the other important Mule
> Ben> issues that Martin and I are (trying to work) on?
>
> I don't know what you guys are _trying_ to work on.  Everything in the
> I18N section of "Architecting XEmacs" is red-flagged.  OTOH, it's
> clear from your posts that you are overburdened, so I can't read
> priority into the fact that you've responded to specific issues in the
> past.  I wrote this awhile ago:
>
> Ben> The big question is, would you be willing to help do the
> Ben> actual implementation, to "be my hands"?
>
> Sure, subject to the usual caveat that I'd need to be convinced it's
> worth doing and a secondary caveat that I am not an experienced coder.

If you'll implement it, I'll design it.  It's more a case of will on
your part than anything else.  I can give you instructions sufficient
to match your level of expertise.

ben

> --
> University of Tsukuba  Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
> Institute of Policy and Planning Sciences  Tel/fax: +81 (298) 53-5091
> _________________  _________________  _________________  _________________
> What are those straight lines for?  "XEmacs rules."

--
In order to save my hands, I am cutting back on my responses,
especially to XEmacs-related mail.  You _will_ get a response, but
please be patient.  If you need an immediate response and it is not
apparent in your message, please say so.  Thanks for your
understanding.

-----------------------------------------------------------------------------

Dec 20, 1999

Implementing message translation is not that hard.  I've already done a
lot of preliminary work in places such as @file{make-msgfile.lex} in
lib-src/.  Finishing up the work is not that big a task; I already know
exactly how it should be done.  Perhaps I'll write up detailed design
instructions for this, as I'm doing for other things.

A lot depends on priority: How important do you think this issue is to
your average Japanese/Chinese/etc. user?  How does it compare to some
of the other important Mule issues that Martin and I are (trying to
work) on?

If I did the design document, would you be willing to do the necessary
bit of C hackery to implement the document?  If the design document is
not specific enough for you, I can give you an "implementation
document" which will definitely be specific enough: i.e. I'll show you
exactly where the code needs to be modified, and how.  The big question
is, would you be willing to help do the actual implementation, to "be
my hands"?
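
/* [A sketch of the sort of C-level translation hook discussed in the
   note above.  All names here are hypothetical; this illustrates the
   idea, not the actual design.  Every user-visible message would be
   filtered through a routine that consults a per-language catalog
   and falls back to the English original.]  */
static const char *
xemacs_gettext (const char *msgid)
@{
  const char *xlat = catalog_lookup (current_language_catalog, msgid);
  return xlat ? xlat : msgid;
@}
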
---------------------------------------------------------------------------

From: Ben Wing <ben@@666.com>
12/14/1999 11:00 PM
Subject: Re: Mule UI disaster: displaying character tables
To: Hrvoje Niksic <hniksic@@iskon.hr>
CC: XEmacs vs Mule <xemacs-mule@@xemacs.org>

What I mean is, please put my name in the header, as well as
xemacs-mule. That way I'll see it in my personal box.

I agree that Mule has problems, but:

Brokenness can be fixed. Slowness can be fixed. Limitations can be
fixed. The design limitation you mention below, for example, is not
really very hard to change.

Keep in mind that I pretty much rewrote Mule from scratch, and did it
@strong{all} in 6-7 months. In comparison with that, the changes below
are pretty minor, and each could be done by a good (and able-bodied!)
programmer familiar with the Mule code in less than a week -- to the
XEmacs code, at least.

The problem is, everyone who could do this work is spending their time
complaining about Mule problems instead of fixing them. I'll gladly
help out anyone who wants to do Mule coding by explaining all the
details; I'll even write a "Mule internals manual", if that will help.
I can also make international phone calls -- they're cheap here in the
US due to the long distance wars. But so far no one has asked me for
help or shown any willingness to do any work on Mule.

Perhaps people are daunted by the seeming vastness of the problems. But
I wager that if I had another 6 months to work on nothing but Mule, it
would be nearly perfect. The basic design of the XEmacs C code is good;
incremental changes, without over-much concern for compatibility, could
make huge strides in a short amount of time (as was the case the whole
time I worked on it, esp. towards the end -- it didn't even
@strong{compile} for 4 months!). A "total rewrite" would be an
incredible waste of time.

Again, I'm completely willing to provide help, documentation, design
improvement suggestions (a la Architecting XEmacs -- which seems to have
been completely ignored, alas), etc.

ben

Hrvoje Niksic wrote:

> Ben Wing <ben@@666.com> writes:
>
> > I'm the one who did most of the Mule work in XEmacs, so if you have
> > any questions about the core, please address them to me directly. I
> > can probably give you a very clear and detailed answer.
>
> Thanks. I think it still makes sense to ask here, so that other
> developers have a chance to chime in.
>
> > However, I need some explanation. What's misdesigned that you're
> > complaining about? And what's the coding-system disaster?
>
> It's been spoken of a lot. Basically:
>
> * Unlike XEmacs/no-Mule, XEmacs/Mule doesn't preserve binary files in
> Latin 2 locales by default. This is annoying for users who are used
> to XEmacs/no-Mule.
>
> * XEmacs/Mule is much slower than XEmacs, and not only because of
> character/byte conversions. It seems that font lookups etc. are
> slower.
>
> * The "coding-system disaster" refers to inherent limitations of the
> coding-system model. If I understand things correctly,
> coding-systems convert streams of bytes to streams of Emchars. It
> does not appear to be possible to create a "gzip" coding system for
> handling gzipped files.
> Even EOL conversions look kludgish:
>
> iso-2022-8
> iso-2022-8-dos
> iso-2022-8-mac
> iso-2022-8-unix
> iso-2022-8bit-ss2
> iso-2022-8bit-ss2-dos
> iso-2022-8bit-ss2-mac
> iso-2022-8bit-ss2-unix
> iso-2022-int-1
> iso-2022-int-1-dos
> iso-2022-int-1-mac
> iso-2022-int-1-unix
>
> Ideally, it should be possible to specify a stream of
> coding-systems, where only the last one converts to actual Emchars.
>
> There are more problems I don't remember right now. Many, many usage
> problems become apparent when I stand and look over the shoulders of
> an XEmacs user who tries to use Mule.

--
In order to save my hands, I am cutting back on my responses, especially
to XEmacs-related mail. You _will_ get a response, but please be
patient. If you need an immediate response and it is not apparent in
your message, please say so. Thanks for your understanding.

-----------------------------------------------------------------------

From: Ben Wing <ben@@666.com>
12/14/1999 12:20 AM
Subject: Re: Mule UI disaster: displaying character tables
To: "Stephen J. Turnbull" <turnbull@@sk.tsukuba.ac.jp>
CC: XEmacs vs Mule <xemacs-mule@@xemacs.org>

I think you should go ahead with your proposal, and assume it will get
implemented. I don't think Martin is really suggesting that API changes
not be allowed, but just that they proceed in a somewhat orderly
fashion; and in any case, I imagine I have final say in cases of
Mule-related conflicts.

ben

"Stephen J. Turnbull" wrote:

> >>>>> "Hrvoje" == Hrvoje Niksic <hniksic@@iskon.hr> writes:
>
> Hrvoje> So next I tried the "Mule" menu. That's right, boys and
> Hrvoje> girls, I've never looked at it before.
>
> For quite a while, it didn't work at all, led to crashes and other
> warm/fuzzy things. IIRC there used to be a top level menu item
> pointing to information about the current language environment but it
> got removed.
>
> Hrvoje> Wow. Seeing shift_jis, iso-2022 variants and (above all
> Hrvoje> things) big5 makes me really warm and fuzzy.
>
> We've been through this recently---you were there. We know what to do
> about it, basically (Ben liked my proposal, and it would fix this
> silliness as well as the binary file breakage). But given that Ben
> and Martin seem to have different ideas about where to go with Mule
> (Ben seemed to be supporting API and implementation revisions, Martin
> evidently wants to keep the current Mule), working on that proposal is
> possibly a waste of time. I've got other stuff on my plate and I'll
> get back to it one of these days (not tomorrow but sooner than Real
> Soon Now).
>
> Hrvoje> The items it presents (leading to further submenus) are:
>
> Hrvoje> 94 character set
> Hrvoje> 94 x 94 character set
> Hrvoje> 96 character set
>
> This _is_ bad UI, now that you point it out. But since it is quite
> natural for a coding system lawyer (as all Japanese users have to be),
> I never noticed it before. Easy enough to fix ("raise my karma").
>
> Hrvoje> But I do bear some Mule scars, so I happily select "96
> Hrvoje> character sets", then ISO8859-2. And I get this:
>
> [Table omitted]
>
> Hrvoje> So me wonders: what the hell is this?
>
> Huh? That is the standard table that you see over and over again in
> references. I'll believe you if you say you've never seen one before,
> but every Japanese user's manual has dozens of pages of those, using
> exactly that format.
>
> The presentation in the range 00--7F is not unreasonable for Latin 2;
> ISO-8859 is a version of ISO-2022, so the high bit should not be
> interpreted as "+ x80" (technically speaking), it should be
> interpreted as a character set shift.
>
> Of course, this doesn't make sense to anybody but a character set
> lawyer, and so should be changed. Especially since the header refers
> to ISO-8859-2 which everybody these days thinks of as _one, 8-bit_
> character set, not two 7-bit ones.
>
> As for the "Japanese" in the table, that's just a really stupid
> "optimization": those happen to be line-drawing characters available
> in JIS X 0208, to make pretty borders. Substitute "-", "+", and "|"
> in appropriate places to make ugly but portable borders.
>
> Hrvoje> Mule is just broken. Warn your friends.
>
> Hrvoje is on the rampage again. Warn your friends ;-)
>
> --
> University of Tsukuba Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
> Institute of Policy and Planning Sciences Tel/fax: +81 (298) 53-5091
> _________________ _________________ _________________ _________________
> What are those straight lines for? "XEmacs rules."

--
In order to save my hands, I am cutting back on my responses, especially
to XEmacs-related mail. You _will_ get a response, but please be
patient. If you need an immediate response and it is not apparent in
your message, please say so. Thanks for your understanding.

---------------------------------------------------------------------------

From: Ben Wing <ben@@666.com>
12/14/1999 10:28 PM
Subject: Re: Autodetect proposal; specifier questions/suggestions
To: "Stephen J. Turnbull" <turnbull@@sk.tsukuba.ac.jp>

I've always thought the specifier API is too complicated (and too
"write-only"), but I went back at one point well after I designed it and
I couldn't figure out an obvious way to simplify it that still kept
reasonable functionality. Perhaps that's what Custom did, and why it
turned out bad.

Inefficiency is a stupid reason not to use them. They seem efficient
enough for redisplay. Changing them might be inefficient, but so is
Emacs Lisp in general, right?

Can you propose an API or functionality change that will make them more
used?

"Stephen J. Turnbull" wrote:

> >>>>> "Ben" == Ben Wing <ben@@666.com> writes:
>
> Ben> I think you should go ahead with your proposal, and assume it
> Ben> will get implemented.
>
> OK. "yas baas" ;-)
>
> On something totally different. I'm really bothered by the fact that
> specifiers are so little used (eg, Custom reimplements them badly),
> and the fact that every package seems to define its own set of faces
> (or whatever), rather than use the specifier mechanism to inherit from
> existing ones, or add new specifications to existing ones. API
> problem?
>
> Also, faces (maybe specifiers in general?) should have an autoload
> mechanism, and a @file{<package>-faces.el} (or
> @file{<package>-specifiers.el}) convention. There are a number of
> faces in (eg) Custom that I like to use, but I have to load Custom to
> get them. And Custom should be able to somehow see all the faces in
> various packages available, even when they are not loaded.
>
> I've seen claims that specifiers aren't very efficient.
>
> Opinions?
>
> --
> University of Tsukuba Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
> Institute of Policy and Planning Sciences Tel/fax: +81 (298) 53-5091
> _________________ _________________ _________________ _________________
> What are those straight lines for? "XEmacs rules."
--
In order to save my hands, I am cutting back on my responses, especially
to XEmacs-related mail. You _will_ get a response, but please be
patient. If you need an immediate response and it is not apparent in
your message, please say so. Thanks for your understanding.

-----------------------------------------------------------------------------

From: Ben Wing <ben@@666.com>
11/18/1999 9:02 PM
Subject: Re: Char-related crashes (hopefully) fixed
To: "Stephen J. Turnbull" <turnbull@@sk.tsukuba.ac.jp>
CC: XEmacs Beta List <xemacs-beta@@xemacs.org>

OK, in summation:

1. C-q is a user-level function and should do whatever makes the most
sense.

2. int-char is a low-level primitive and should never depend on
high-level settings like language environment.

3. Everything you can do with int-char can and should be done with
make-char -- representation-independent, much less likelihood of bugs,
etc. Therefore int-char should be removed.

4. Note that CLTL2 also removes int-char.

5. Your statement

> In one-byte buffers (either Olivier's 1/2/4 extension or `xemacs -font
> *-iso8859-2') it implicitly will have dependence whatever you say.

is confusing internal and external representations.

ben

"Stephen J. Turnbull" wrote:

> Can somebody give a bunch of examples where using integers as
> characters is useful? For that matter, where they are actually used?
> Ben said "backward compatibility," but I haven't seen this used, and I
> don't really know how to grep for it. I have grepped for int-char,
> int-to-char, char-int, and char-to-int and they're pretty rare in the
> core and package code (2/3 of it) that I have.
>
> The only one that I ever use is the C-q hack for inserting characters
> by code value at the keyboard, and that could arguably be (and in
> Japanese invariably is) delegated to an input method which would know
> about language environment (and return a true character).
>
> For iterating over a character set in "natural" order, only ASCII
> satisfies the requirement of having one, and even that's shaky. AFAIK
> the Swedes and the Norwegians, or is it the Danes, disagree on
> ordering the _letters_ in the ISO-8859-1 character set. This really
> should be table-driven, and will have to be for everything except
> ASCII and ISO-8859-1 if we go to a Unicode internal representation.
>
> We already have primitives for efficient case conversion and the like.
>
> The only example I can think of offhand where you would really really
> want the facility is to iterate over a code space where you don't know
> which points are legal characters. Eg, to print out tables of fonts.
> Pretty specialized. And this can be done through make-char, anyway.
>
> According to CLtL1, the main portable use for char-int is for hashing.
> But that doesn't square with the kind of usage we've been talking
> about (in loops and the like).
>
> What else am I missing?
>
> Ben's desiderata have some problems.
>
> >>>>> "Ben" == Ben Wing <ben@@666.com> writes:
>
> Ben> Either int-char should be the mirror opposite of char-int
> Ben> (i.e. accept all legal char integers), or it should be
> Ben> removed entirely.
>
> OK. I agree with this.
>
> Ben> int-char should @strong{never} have any dependence on the language
> Ben> environment.
>
> In one-byte buffers (either Olivier's 1/2/4 extension or `xemacs -font
> *-iso8859-2') it implicitly will have dependence whatever you say.
> Even without Mule, people can always use external encoders to change
> raw ISO-8859-2 to ISO-2022 (not that anybody sane ever would, OK,
> Hrvoje?).
> Then the two files will be interpreted differently in a
> Latin-1 locale Mule; the ISO-8859-2 file will be recognized as
> ISO-8859-1, and the ISO-2022 file will be internally interpreted as
> ISO-8859-2.
>
> The point is that people normally assume that int-char should accept
> their "natural" integer to character map. For Americans, that's
> ASCII, for Germans, that's ISO-8859-1, for Croatians, that's
> ISO-8859-2. And it works "correctly" in a no-mule XEmacs with `-font
> *-iso8859-2'! Japanese usually use ku-ten or JIS, and there's a
> "natural" map from byte-sized integer pairs to shorts, but it's full
> of holes. So language environments don't agree on what a legal char
> integer is, and where they do (eg, ISO-8859-1 and ISO-8859-2), they
> don't agree on the map. To satisfy your dictum (with which I agree,
> but which I take to mean we should get rid of these functions) we can
> take the intersection where they agree
>
> ==> legal char integers == ASCII
>
> which is what I prefer, or pick something arbitrary and efficient
>
> ==> char-int returns the internal representation
>
> which I really hate, or something else. Suggestions?
>
> Ben> I don't think C-q should either. If Hrvoje wants to insert
> Ben> Latin-2 characters by number, then make C-u C-q work so that
> Ben> it also prompts for a character set, with a default chosen
> Ben> from the language environment.
>
> And restrict this to ASCII? Or assume Latin-1 in GR if there is no
> prefix argument?
>
> This is a useful feature. C-q currently inserts Latin-2 characters
> for Hrvoje in no-mule XEmacs (stretching the point only a little); I
> think it should continue to do so in Mule. This really is an input
> method issue, not a keyboard issue. In XEmacs, inserting an integer
> into a buffer has no meaning. Users insert characters. So this is a
> completely different issue from the programming API, and should not be
> considered analogous.
>
> Maybe we could have C-q insert according to the Unicode standard, and
> treat C-u C-q as part of the input method. But I think most users
> would prefer to have C-q insert according to their locale-standard
> tables, and select Unicode explicitly using the C-u C-q idiom. In
> fact (again this points to the input method idea), Japanese users
> would probably like to have the alternatives of using kuten (pairs
> from 1--94 x 1--94) or JIS (pairs from 0x21--0x7E x 0x21--0x7E) as
> options since both indexing systems are common in tables.
>
> --
> University of Tsukuba Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
> Institute of Policy and Planning Sciences Tel/fax: +81 (298) 53-5091
> __________________________________________________________________________
> __________________________________________________________________________
> What are those two straight lines for? "Free software rules."

--
ben

--
In order to save my hands, I am cutting back on my responses, especially
to XEmacs-related mail. You _will_ get a response, but please be
patient. If you need an immediate response and it is not apparent in
your message, please say so. Thanks for your understanding.

-----------------------------------------------------------------------------

From: Ben Wing <ben@@666.com>
11/16/1999 11:03 PM
Subject: Re: Char-related crashes (hopefully) fixed
To: Yoshiki Hayashi <t90553@@m.ecc.u-tokyo.ac.jp>
CC: Hrvoje Niksic <hniksic@@iskon.hr>, XEmacs Beta List <xemacs-beta@@xemacs.org>

Either int-char should be the mirror opposite of char-int (i.e. accept
all legal char integers), or it should be removed entirely.
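For comparison, the representation-independent idiom uses `make-char'.
A minimal sketch, assuming a Mule build, where ISO-8859-2 is the
96-character charset `latin-iso8859-2' with position codes 32 through
127:

(let ((i 32))
  (while (< i 128)
    (insert (make-char 'latin-iso8859-2 i))
    (setq i (1+ i))))

No integer here depends on the internal representation or on the
language environment.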
int-char should @strong{never} have any dependence on the language
environment.

I don't think C-q should either. If Hrvoje wants to insert Latin-2
characters by number, then make C-u C-q work so that it also prompts for
a character set, with a default chosen from the language environment.

ben

Yoshiki Hayashi wrote:

> Hrvoje Niksic <hniksic@@iskon.hr> writes:
>
> > As Ben said, now that we've fixed the actual bugs, we can think about
> > changing the behaviour for int-char conversions for 21.2.
>
> The following proposals have been made for which integers should be
> accepted where characters are expected:
>
> 1) Don't allow anything
> 2) Accept 0-127
> 3) Accept 0-256
> 4) Accept everything
>
> Other things proposed are:
>
> a) When doing C-q, treat 128-256 as Latin-2 in the Latin-2
> language environment.
>
> So far, most of the proposals are intended to apply to every int-char
> conversion; I'd like to make the behaviour depend on the particular
> function.
>
> My plan is:
> Accept only 0-256 in every place except int-to-char.
> int-to-char accepts every valid integer.
> Make a new function which does int-to-char conversion
> correctly according to the language environment.
>
> This way, most of the code which does (insert (1+ ?a)) or
> something continues working. Now the internal representation has
> changed a little bit, so disabling characters above 256 will
> warn those who are dealing with the internal representation
> directly, which is bad. Still, you can do
> (let ((i 1442))
>   (while (< i 2000)
>     (insert (int-to-char i))
>     (setq i (1+ i))))
> to achieve the old behaviour.
>
> For C-q, I'm not for changing its original definition,
> since it might confuse people who are expecting Latin-1 in
> other language environments, and typing just one integer doesn't
> make sense in a multibyte world. It's cleaner to make a new
> function which does make-char according to the charset of
> language-info-alist, so that people who use that often can
> bind it to C-q or some other key.
>
> --
> Yoshiki Hayashi

--
ben

--
In order to save my hands, I am cutting back on my responses, especially
to XEmacs-related mail. You _will_ get a response, but please be
patient. If you need an immediate response and it is not apparent in
your message, please say so. Thanks for your understanding.
@end example

@node Discussion -- Instantiators and Generic Property Accessors, Discussion -- Switching to C++, Discussion -- Multilingual Issues, Future Work Discussion
@section Discussion -- Instantiators and Generic Property Accessors
@cindex discussion, instantiators and generic property accessors
@cindex instantiators and generic property accessors, discussion

From: Ben Wing <ben@@666.com>
Date: Sun, 05 May 2002 05:40:07 -0700
Subject: generic functions, new instantiator API

I've been reading the C++ manual and getting polymorphism, inheritance,
generic functions, etc. in my head. We have our own "generic function"
already in terms of `get', `put', etc. which accept various objects.
i'm thinking of extending them so they can accept, as well as objects,
lists (either alists or plists) or plist-style vectors, and manipulate
their properties. what do people think of this?

Also, i'm designing a new API for "instantiators", which are objects
whose main purpose is to hold properties and provide a way of notifying
their containing specifiers when they change. Instantiator objects are
used when the instantiator gets sufficiently complicated that using
lists and vectors gets unwieldy -- e.g. when creating widget trees, such
as would appear in dialog boxes.
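To make the proposed polymorphic `get' concrete, here is a minimal Lisp
sketch of the dispatch; the function name is hypothetical, and a real
version would presumably be implemented in C alongside the existing
`get':

@example
(defun hypothetical-generic-get (object prop &optional default)
  "Return PROP of OBJECT, defaulting to DEFAULT.
OBJECT may be a propertied object, an alist, a plist, or a
plist-style vector such as [type :prop value ...]."
  (cond ((vectorp object)
         ;; tagged plist-style vector: properties start at index 1
         (let ((i 1)
               (len (length object))
               (result default))
           (while (< i (1- len))
             (if (eq (aref object i) prop)
                 (setq result (aref object (1+ i))
                       i len)          ; found it: stop scanning
               (setq i (+ i 2))))
           result))
        ((and (consp object) (consp (car object)))
         ;; alist of (prop . value) cells
         (let ((cell (assq prop object)))
           (if cell (cdr cell) default)))
        ((consp object)
         ;; plist of prop value ...
         (let ((tail (memq prop object)))
           (if tail (car (cdr tail)) default)))
        (t
         ;; a real object: defer to the existing `get'
         (get object prop default))))
@end example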
you want the ability to programmatically traipse up and down the tree
and dynamically modify a part of the tree -- e.g. a property on a single
widget -- as necessary, and have the internal code automatically notice
this change and perform any necessary updates. lists and vectors are
too low-level for this -- no way to get their parent, no way for
internal code to be notified when changes occur, can't always maintain
object identity when making property changes, no way to error-check
illegal changes, etc.

You could also extend this api to cover toolbars; it would probably make
toolbar manipulation significantly easier. but you'd have to think
about backward compatibility in such cases.

here is what the api looks like so far -- making use of a newly-added
facility for keyword args in primitives. comments are welcome.

@example
DEFUN ("make-instantiator", Fmake_instantiator, 1, MANY, 0, /*
Create a new instantiator object from TYPE and PROPS.
TYPE should be one of the image instantiator formats described in
`make-glyph'. The rest of the arguments should be keyword properties
and associated values, as also described in `make-glyph'. TYPE can
also be an old-style vector instantiator.

Instantiator objects can be used as instantiators (see `make-specifier')
in glyphs in place of old-style vector instantiators. They are
especially used for complicated, nested graphical elements such as
widgets (buttons, text fields, etc.) -- in fact, widget instantiators
will automatically be converted into instantiator objects if they are
given in vector format.

Individual properties on instantiators can be manipulated using
`set-instantiator-property'. If the property's value is a list (for
example, a list of children), you can also use `add-instantiator-item'
to add or insert individual elements in the list.
`delete-instantiator-item' can be used to delete individual items in
the list; `get-instantiator-item' to locate individual items in the
list; and `get-instantiator-item-position' to return the position of
individual items in the list. `map-instantiator' can be used to
(recursively or not) map over an instantiator and its children.
`find-instantiator' can be used to (recursively or not) locate an
instantiator in a tree composed of an instantiator and its descendants.
*/
       /* (type &rest props) */
       (int nargs, Lisp_Object *args))
@{
  /* ^^#### */
  return Qnil;
@}

DEFUN ("set-instantiator-property", Fset_instantiator_property, 3, 3, 0, /*
Set property PROP to VALUE in INSTANTIATOR.
INSTANTIATOR should have been created with `make-instantiator'. Valid
properties depend on the instantiator type and are described in
`make-glyph'. For properties that are lists of items, individual items
can be added or deleted using `add-instantiator-item' and
`delete-instantiator-item'.

For compatibility, this also accepts an old-style vector instantiator,
and destructively modifies it; in this case, adding a property requires
creating a new vector, which is returned. You need to use
`set-glyph-image' on glyphs, or `set-specifier-dirty-flag' on the
result of `glyph-image', to register instantiator changes to vector
instantiators. (New-style instantiators automatically convey property
changes to any glyphs they have been attached to.)
*/
       (instantiator, prop, value))
@{
  Lisp_Object *elt;
  int len;

  /* ^^#### */
  CHECK_VECTOR (instantiator);
  if (!KEYWORDP (prop))
    invalid_argument ("instantiator property must be a keyword", prop);

  elt = XVECTOR_DATA (instantiator);
  len = XVECTOR_LENGTH (instantiator);

  for (len -= 2; len >= 1; len -= 2)
    @{
      if (EQ (elt[len], prop))
	@{
	  elt[len + 1] = value;
	  break;
	@}
    @}

  /* Didn't find it so add it. */
  if (len < 1)
    @{
      Lisp_Object alist = Qnil, result;
      struct gcpro gcpro1;

      GCPRO1 (alist);
      alist = tagged_vector_to_alist (instantiator);
      alist = Fcons (Fcons (prop, value), alist);
      result = alist_to_tagged_vector (elt[0], alist);
      free_alist (alist);
      RETURN_UNGCPRO (result);
    @}

  return instantiator;
@}

DEFUN ("instantiator-property", Finstantiator_property, 2, 3, 0, /*
Return the property PROP of INSTANTIATOR, or DEFAULT if PROP has no
value. INSTANTIATOR should have been created with `make-instantiator'.
*/
       (instantiator, prop, default_))
@{
  /* ^^#### */
  return Qnil;
@}

DEFUN ("instantiator-properties", Finstantiator_properties, 1, 1, 0, /*
Return a plist of all defined properties in INSTANTIATOR.
INSTANTIATOR should have been created with `make-instantiator'.
*/
       (instantiator))
@{
  /* ^^#### */
  return Qnil;
@}

DEFUN ("instantiator-type", Finstantiator_type, 1, 1, 0, /*
Return the type of INSTANTIATOR.
INSTANTIATOR should have been created with `make-instantiator'. Valid
types are the instantiator formats described in `make-glyph'.
*/
       (instantiator))
@{
  /* ^^#### */
  return Qnil;
@}

DEFUN ("instantiator-parent", Finstantiator_parent, 1, 1, 0, /*
Return the parent of INSTANTIATOR.
INSTANTIATOR should have been created with `make-instantiator'.
*/
       (instantiator))
@{
  /* ^^#### */
  return Qnil;
@}

DEFUN_WITH_KEYWORDS ("map-instantiator", Fmap_instantiator, 2, 2, 1, 0, 0, /*
Map FUN recursively over INSTANTIATOR and its descendants.
FUN is called with one argument, the instantiator. If :norecurse is
non-nil, don't recurse, just map over the direct children (not
including the instantiator itself).
*/
       (fun, instantiator), (norecurse))
@{
  /* ^^#### */
  return Qnil;
@}

DEFUN_WITH_KEYWORDS ("find-instantiator", Ffind_instantiator, 3, 3, 1, 0, 0, /*
Find an instantiator by PROP and VALUE in INSTANTIATOR and its
descendants. Returns the first item which has PROP set to VALUE. If
:norecurse is non-nil, don't recurse, just look through the direct
children (not including the instantiator itself).
*/
       (instantiator, prop, value), (norecurse))
@{
  /* ^^#### */
  return Qnil;
@}

DEFUN_WITH_KEYWORDS ("add-instantiator-item", Fadd_instantiator_item, 3, 3, 7, 0, 0, /*
Add an item to an instantiator property that's a list of items.
\(E.g. the children of an instantiator). PROP is the property whose
list of items is being modified, and ITEM is the item to add. To
insert somewhere before the end, use one of the keywords:

-- :position specifies a zero-based index of an item, and the new item
will be inserted just before the item indicated by the position.
Negative numbers count from the end -- thus -1 will cause insertion
before the last item, -2 before the second-to-last item, etc.

-- :before-item and :after-item specify items to insert before or
after. :test (defaults to `eq') can be used to specify the way to
compare the given item with existing items.

-- :before-property and :after-property search for an item to insert
before or after by looking for an item with the given property. If
:value is given, the property must have that value; otherwise, it
simply must exist.
This method of insertion works if the items in PROP's list are anything
that can have or hold properties. \("To have and to hold, for ever and
ever ...") This includes:

-- any object for which `get' works
-- else, if object is a vector, assume it's a plist-style vector
-- else, if object is a cons, and its first element is also a cons,
   assume it's an alist
-- else, if object is a cons, assume it's a plist
*/
       (instantiator, prop, item),
       (position, before_item, after_item, test, before_property,
        after_property, value))
@{
  /* ^^#### */
  return Qnil;
@}

DEFUN_WITH_KEYWORDS ("delete-instantiator-item", Fdelete_instantiator_item, 2, 2, 5, 0, 0, /*
Delete an item in an instantiator property that's a list of items.
\(E.g. the children of an instantiator). PROP is the property whose
list is being searched. One of these keywords should be given:

-- :position specifies a zero-based index of an item. Negative numbers
count from the end -- thus -1 refers to the last item, -2 to the
second-to-last item, etc.

-- :item specifies the item to delete. :test (defaults to `eq') can be
used to specify the way to compare the given item with existing items.

-- :property searches for an item with the given property. If :value
is given, the property must have that value; otherwise, it simply must
exist.

This method of specifying items works if the items in PROP's list are
anything that can have or hold properties -- see
`add-instantiator-item'.
*/
       (instantiator, prop), (item, test, position, property, value))
@{
  /* ^^#### */
  return Qnil;
@}

DEFUN_WITH_KEYWORDS ("get-instantiator-item", Fget_instantiator_item, 2, 2, 3, 0, 0, /*
Get an item in an instantiator property that's a list of items.
\(E.g. the children of an instantiator). PROP is the property whose
list is being searched. One of these keywords should be given:

-- :position specifies a zero-based index of an item. Negative numbers
count from the end -- thus -1 refers to the last item, -2 to the
second-to-last item, etc.

-- :property searches for an item with the given property. If :value
is given, the property must have that value; otherwise, it simply must
exist.

This method of specifying items works if the items in PROP's list are
anything that can have or hold properties -- see
`add-instantiator-item'.
*/
       (instantiator, prop), (position, property, value))
@{
  /* ^^#### */
  return Qnil;
@}

DEFUN_WITH_KEYWORDS ("get-instantiator-item-position", Fget_instantiator_item_position, 2, 2, 4, 0, 0, /*
Return an item's position in an instantiator property that's a list of
items. \(E.g. the children of an instantiator). PROP is the property
whose list is being searched. One of these keywords should be given:

-- :item specifies the item to search for. :test (defaults to `eq')
can be used to specify the way to compare the given item with existing
items.

-- :property searches for an item with the given property. If :value
is given, the property must have that value; otherwise, it simply must
exist.

This method of specifying items works if the items in PROP's list are
anything that can have or hold properties -- see
`add-instantiator-item'.
*/
       (instantiator, prop), (item, test, property, value))
@{
  /* ^^#### */
  return Qnil;
@}

DEFUN ("image-instance-instantiator", Fimage_instance_instantiator, 1, 1, 0, /*
Return the instantiator from which IMAGE-INSTANCE was created.
*/
       (image_instance))
@{
  /* ^^#### */
  return Qnil;
@}
@end example

some other useful stuff:

@example
DEFUN ("make-image-instance", Fmake_image_instance, 1, 4, 0, /*
Return a new `image-instance' object.

Image-instance objects encapsulate the way a particular glyph (pixmap,
widget, etc.) is displayed on a particular device. In most
circumstances, you do not need to directly create image instances;
instead, you create a glyph using `make-glyph' and add settings (or
"instantiators") onto it using `set-glyph-image', and XEmacs creates
the image instances as necessary. However, it may occasionally be
useful to explicitly create image instances, if you want more control
over the instantiation process. For more information on instantiators
and instances, see `make-specifier'.

DATA is an image instantiator, which describes the image; see
`make-glyph' for a description of the allowed values.

The most likely circumstance where you need to deal directly with image
instances is in widget callbacks -- e.g. the callback that's executed
when a button is pressed in a dialog box of type `general' (see
`make-dialog-box'). In this case, the widget that was activated is
described by an image instance. (The callback is usually written as an
interactive function with an interactive spec of (interactive \"e\"),
and a single `event' argument. The event will be an activate event,
describing the user action that triggered the callback. The image
instance is retrievable from the event using `event-image-instance'.)
Handling the action may involve setting properties on the image
instance or other image instances in the dialog box in which the widget
is usually contained -- or changing the instantiator that generated the
image instance, if you want permanent changes that will be reflected
the next time the dialog box is popped up. Properties on an image
instance are set using `set-image-instance-property'.

If the widget is part of a hierarchy of widgets (as is usually the case
in a dialog box, but may not apply if the widget was inserted by itself
in a buffer [by creating a glyph and attaching it to an extent -- see
`make-glyph']), there will be a corresponding hierarchy of image
instances to describe this particular instance of the dialog box. You
can retrieve other image instances in the hierarchy using primitives
such as `image-instance-parent', `image-instance-children', and
`find-image-instance'.
@end example

...

@example
(defun image-instance-property (image-instance property &optional default)
  "Return the given property of the given image instance.
Returns DEFAULT if the property or the property method do not exist
for the image instance in the domain."
  (check-argument-type 'image-instance-p image-instance)
  (get image-instance property default))

(defun set-image-instance-property (image-instance prop value)
  "Set the property PROP on IMAGE-INSTANCE to VALUE.

Only certain properties of the image instance can be changed, and they
represent \"temporary\" changes. If you want to make permanent
changes, you need to change the instantiator that generated the
instance -- retrieve the instantiator with
`image-instance-instantiator', and change its properties with
`set-instantiator-property'.

This applies mostly to widgets. For example, you can set a property on
a widget image instance to change the state of a radio or checkbox
button, set the text currently in an edit field, etc. However, those
changes apply only to the *currently* displayed widgets.
If these widgets are in a dialog box, and you want to change the way
the widgets in the dialog box appear *each* time the dialog box is
displayed, you need to change the instantiator.

Make sure you understand the difference between instantiators and
instances. An \"instantiator\" is a specification, indicating how to
determine the value of a setting whose value can vary in different
circumstances or \"locales\" (buffers, frames, etc.). An \"instance\"
is the resulting value in a particular circumstance. For more
information, see `make-specifier'."
  (check-argument-type 'image-instance-p image-instance)
  (put image-instance prop value))
@end example

From: "Stephen J. Turnbull" <stephen@@xemacs.org>
Date: 06 May 2002 16:40:46 +0900

>>>>> "Ben" == Ben Wing <ben@@666.com> writes:

Ben> We have our own "generic function" already in terms of `get',
Ben> `put', etc. which accept various objects.

I proposed extending the class to stuff like charsets about two years
ago, and I think you were one of the folks who objected.

Ben> i'm thinking of extending them so they can accept lists
Ben> (either alists or plists) or plist-style vectors, and
Ben> manipulate their properties. what do people think of this?

I think extending to lists is something we should approach cautiously.
For one thing, if "get" is polymorphic, "put" would have to be too.
But how does it decide when dealing with "nil"?

Ben> you want the ability to programmatically traipse up and down
Ben> the tree and dynamically modify a part of the tree -- e.g. a
Ben> property on a single widget -- as necessary, and have the
Ben> internal code automatically notice this change and perform
Ben> any necessary updates.

I like this.

From: "Stephen J. Turnbull" <stephen@@xemacs.org>
Date: 07 May 2002 11:17:05 +0900

>>>>> "Neal" == Neal D Becker <nbecker@@hns.com> writes:

Neal> I thought that generic polymorphism was inherent in lisp, as
Neal> it is dynamically evaluated. Why would you need anything
Neal> special in the way functions are written to support generic
Neal> programming?

I think it's basically a technical matter. We have a number of objects
that have property lists besides symbols. Many of them have special
functions (coding-system-get, coding-system-property, charset-property
are examples I find particularly obnoxious). I would like to make
these obsolete by allowing `get' on charsets, coding systems, etc.

And currently we have

(let ((p (symbol-plist symbol)))
  (plist-get p prop))

Ben would like to allow

(let ((p (symbol-plist symbol)))
  (get p prop))

with `get' determining whether P is a plist or an alist.

And where Michael says "why not use hash tables?", I see `(get hash
key)' (probably to Michael's horror ;-).

This isn't Lisp any more, though, in some sense. But then we haven't
been that for years. AFAIK all real Lisps restrict `get' to symbols.

From: sperber@@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor])
Date: Tue, 07 May 2002 08:52:53 +0200

Indeed. I'll just say "goosebumps."

But I don't see why it has to be GET that accesses the plist. You just
build more dispatch into GET with no immediate benefit to the API.
Ad-hoc genericity gets you something when there's some place in the
code you don't know what the underlying object is. I don't see this
being the case here.

Why do you find them "obnoxious"?

Stephen> This isn't Lisp any more, though, in some sense. But then we haven't
Stephen> been that for years. AFAIK all real Lisps restrict `get' to symbols.
Actually, Scheme (which admittedly isn't a real Lisp by many standards)
doesn't have get/put at all. And good riddance, I might add :-)

From: "Stephen J. Turnbull" <stephen@@xemacs.org>
Date: 07 May 2002 20:04:50 +0900

>>>>> "ms" == Michael Sperber <sperber@@informatik.uni-tuebingen.de> writes:

Stephen> special functions (coding-system-get, coding-system-property,
Stephen> charset-property are examples I find particularly obnoxious). I would
Stephen> like to make these obsolete by allowing `get' on charsets, coding
Stephen> systems, etc.

ms> But I don't see why it has to be GET that accesses the plist.
ms> You just build more dispatch into GET with no immediate
ms> benefit to the API. Ad-hoc genericity gets you something when
ms> there's some place in the code you don't know what the
ms> underlying object is. I don't see this being the case here.

ms> Why do you find them "obnoxious"?

Their semantics are basically `get'. Why not use that name? Of course
I agree that it doesn't have to be `get', but why clutter things up?

But those are particularly obnoxious because of the object/name
confusion they have built in. Ie, my real problem with them is more
ancient Mule idiom than the *-get or *-property names for the API.

ms> Actually, Scheme (which admittedly isn't a real Lisp by many
ms> standards) doesn't have get/put at all.

What does it use instead?

(And no, you can't bait _me_ with Lisp definition trolls, I think of
XML as "declarative LISP with fat, flavored, fuzzy parentheses.")

From: sperber@@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor])
Date: Tue, 07 May 2002 13:26:13 +0200

Stephen> Their semantics are basically `get'. Why not use that name?

Because it doesn't convey as much information in the source code as it
could, and because it provides less type checking than it could.

Stephen> What does it use instead?

What for? I've never felt the desire to use them, and it seems to me
that in Lisp, properties are usually used for one of two purposes:

- As a poor man's replacement for hash tables.

- To store data which should really be stored inside the object itself.

In the former case, I use a hash table. In the latter case, I store
the data in the object itself.

@node Discussion -- Switching to C++, Discussion -- Windows External Widget, Discussion -- Instantiators and Generic Property Accessors, Future Work Discussion
@section Discussion -- Switching to C++
@cindex discussion, switching to c++
@cindex switching to c++, discussion

From: "Ben Wing" <ben@@666.com>
Date: Fri, 10 May 2002 19:42:53 -0700

i know i'm opening up a bag of worms by suggesting this, but what about
moving to C++?

I know others advocate this (Jan, Martin), and the more I read
Stroustrup's 3rd edition, the more I realize that *HUGE* amounts of
code in XEmacs, and in particular most of the really hairy and
hard-to-understand stuff -- lots of weird macros, faux object-oriented
stuff implemented in multiple places, each differently (Lisp objects;
methods on consoles/devices/etc; specifier sub-types; coding-system
sub-types; image-instance device methods; image-instance format
methods; etc.), all the GCPROS (which could go entirely), dynarrs,
eistring, etc. etc. -- is simply superseded by stuff already built into
C++ or supplied by the standard libraries.
Just now, I was going through the redisplay code, and noticing the huge
amount of duplication between gtk and X, something that's hard to fix
[except through super ad-hoc ways like using a .c file as a .h file to
"generate" lots of similar but slightly different code] in C, but is
extremely easy in C++ using inheritance [and/or templates]. for
example, instead of having just one layer of device methods, you'd have

@example
general -> tty -> windowing -> mswindows -> xlike -> x, gtk
@end example

which would nicely and naturally encapsulate lots of duplicated [and
thus, hard to maintain] code.

even more of a win would be the GCPRO's. Taking advantage of
constructors and destructors, we could simply do away [COMPLETELY!]
with explicit gcproing, and still have everything gcpro'd. [in fact,
much more reliably -- none of the dreaded "temporary" problem, and
every reference is always gcpro'd so we have greater flexibility for GC
work -- take note, Michael :-) -- e.g. we could safely garbage collect
when allocating, and we could even implement a relocating garbage
collector. in the few places where performance might be an issue [i
seriously doubt there'd be many of them], we simply use a separate
Lisp_Object_No_GCPRO class (presumably a base class of Lisp_Object),
and manually handle the GCPRO's ourselves. If we needed to distinguish
here between static and dynamic objects, or static vs. local vs. heap,
we could do so easily with bit flags in the object pointed to -- we
have space for lots of them.

code reliability and maintainability would likely substantially
increase due to the ability to express most things in a natural C++ way
instead of lots of weird, hackish, hard-to-understand C constructs
implementing things the language wasn't really designed for.
Furthermore, there are even some possibilities for increased speed --
many operations that can only reasonably be done now using Lisp objects
(and the associated gc overhead and such) could be done using the
high-level built-in facilities of C++, which in their ease of use
approach Lisp; and C++ has `inline' built-in, so we could easily add
various container classes to improve the understandability of the code
without loss of performance.

finally, making the "switch" is trivial, since martin did the initial
work making XEmacs C++-safe and I've been keeping it that way -- I
regularly build under C++ and fix any problems. All we'd need to do is
switch the compiler and start gradually introducing C++ constructs as
we feel like it. for those concerned that dumping might stop working,
[a] i don't think it would, [b] the portable dumper has come of age --
i use it almost all the time, and it's rock-solid and not obviously
slower than unexec.

the only major concern that i see is the quality of the C++
implementations out there, in particular G++, which is the most widely
available. I know that 6 years ago G++ was a bit rocky -- I went to
interview for Netscape, and they mentioned having to rely on various
vendor implementations of C++, whereas they would have preferred G++ if
it was reliable, due to the sameness of environment. But that was *SIX
YEARS* ago! Stroustrup 3rd Edition has been out for 5 years now, and
it defines, as far as I know, ANSI Standard C++ -- so that's at least 5
years to implement a standard. It's hard to believe that G++ isn't
completely reliable now; but I do not have as much experience as
others. What do you think?
I would *really* like to make this change, as it would immensely
facilitate lots of code I'm working on and will be working on, plus of
course add all the above benefits once we get around to converting the
code.

From: "Stephen J. Turnbull" <stephen@@xemacs.org>
Date: 11 May 2002 15:34:08 +0900

I don't have a real problem with it, as long as we're very conservative
about it, ie, using C++ as "clean C with classes", and introducing
things slowly. Implement everything ourselves, avoid the standard
class libraries.

I've been following the Python lists recently, and although the bias is
easy to guess, it's interesting to note that the people who are most
anti-C++ are typically the ones who are world-class C++ programmers
with big projects under their belts. Many of them actually advocate
using C rather than C++.

I do worry that with Martin currently out of the picture we don't have
an active C++ standards bigot and implementation collector to deal with
compiler-specific issues. We do OK with C, but C++ is a much more
complex, subtle language. Is there anybody else to plausibly take on
that role?

From: Hrvoje Niksic <hniksic@@arsdigita.com>
Date: Sun, 12 May 2002 20:58:50 +0200

I'm strongly opposed to this. Here are some reasons:

* C++ may fix some problems, but it will introduce others, some of
which may be much harder to fix. XEmacs is already a large program,
hard to understand. C++ will not improve things.

* XEmacs will suddenly become uncompilable and unusable in many
environments where it used to build perfectly fine -- for example,
those that don't ship with a C++ compiler at all. We could "make GCC 3
a requirement", but I don't like that idea.

* People without C++ experience will no longer be able to hack XEmacs.
I'd be the first one to leave. For example, I know quite a few
programmers who don't care for Qt and KDE simply because it's C++.

* C is the /lingua franca/ of free software development. If we're
switching languages, it should be for a good reason and to something we
agree is an improvement (e.g. Common Lisp).

* C++ is not the be-all end-all to everything. People who understand
it well are usually the first ones to warn against it. It's possible
that they were scarred by using C++ at a bad time, but I'd think twice
before discounting their advice or blindly believing that C++ is now
all better.

If you were writing a new project, I'd say go for it. But at this
point, this seems like a needless tweak. Do we really need *more*
internal reorganizations? Shouldn't we work on user-visible features?
Wasn't that what you yourself advocated when I talked to you?

From: Didier Verna <didier@@xemacs.org>
Date: Tue, 14 May 2002 11:21:32 +0200

Switching to C++ was first suggested at the M17n'99 conference in
Japan, IIRC. Although it was around a table with many empty bottles of
beer on it :-), I've kept some hope from that time. I'm happy to see
Ben in favor of this today. This idea coming from him is likely to
have more impact than when it comes from me or Yan or whoever else.

There are several points that make me favor this change:

- C++ support is already there thanks to Martin.

- The amount of OO simulation code written in C in XEmacs is *HUGE*.
But more important, this code simulates *BASIC* OO features that are no
longer a problem for any C++ compiler.
I mean, by just using the basic features of C++ in terms of OO and data
abstraction (classes, inheritance, methods (with inlining), operator
overloading), we'll win big time in code size, readability,
maintainability, and correctness.

- the fact that *basic* OO support is already a major gain is very
important to me. You don't have to go for generic programming with
templates everywhere to write an OO XEmacs[1]. Switching to C++ can be
completely gradual, and we can even stop early in the range of C++
features we use. That will already be a big gain. That's also the
advantage over the idea of using another more modern language to
rewrite the core; something completely unrealistic.

- there is another important aspect of the design issue: many people
(including from the industry) have worked on abstracting common
problems in an OO philosophy. Some people claim that the concepts that
emerged from this kind of work are just C++-specific hackery, and
they're probably right, but anyway that's obviously not a problem for
us. Any C++ writer should have the "Design Patterns" book in hand. It
already has good design solutions for many problems that we're facing
in XEmacs (like, supporting more than one widget set), because these
problems are so *common*. By using C++ we can directly benefit from
the experience of other large applications designers.

Footnotes:
[1] We're working on GP in C++ in our lab here and we trigger bugs in
gcc 3. But you should see the code in question, it's pure template and
static programming. Things that XEmacs will never need.

From: Daniel Pittman <daniel@@rimspace.net>
Date: Sat, 11 May 2002 19:15:04 +1000

> I've been following the Python lists recently, and although the bias
> is easy to guess, it's interesting to note that the people who are
> most anti-C++ are typically the ones who are world-class C++
> programmers with big projects under their belts. Many of them
> actually advocate using C rather than C++.

I wouldn't class myself as "world-class", but I can understand this
perspective based on my experiences with large projects that aim for
portability to vendor compilers, not just gcc.

The biggest problem, assuming that you are willing to ignore platforms
like Sinix-PC[1] and their poor compiler support[2], is that it's easy
to shoot yourself in the foot with C++.

The biggest portability problems are namespaces, the standard C++
library and template support, in about that order, followed by
exception handling.

Very few things get namespaces right, even today, with gcc being one of
the worst. Tempting as they are, they are best avoided where possible,
except in compiler and platform specific code.[3]

The standard C++ library, which supports RTTI and a few other things
including the [io]stream tools, is less than totally reliable although
it can be used with relative safety most places. What you really need
to watch out for with that is the extensions that every vendor in the
universe has added to their collections because there isn't any
standard way of doing common things in most of these areas.

The Standard Template Library isn't. Aside from a tendency to expose
limitations of symbol name lengths[4], the library tends to be
unreliable in behavior between compilers and platforms. Not enough to
make simple things fail, though, just enough to make it occasionally do
odd things or show up obscure bugs in your code...

It's also not very well designed, I think, as libraries go. That's a
personal opinion, though, and not universal.

C++ exceptions are an interesting issue.
They can work extremely well as a mechanism for managing errors and
improve the reliability of the system. They can also become an
unending nightmare of epic proportions, causing more pain and suffering
than you can imagine. :/

The main difference between the two situations, so far as I can tell,
comes from two aspects of design that have ... far reaching
implications.

If you try adding exceptions to code that isn't ready to deal with
them, things tend to go very badly wrong. I /think/ that the existing
exception mechanisms in XEmacs would be similar enough that this isn't
the case, though.

The other is that you need to base your code very strongly around the
"construction acquires, destruction releases" model of resource
handling. This, of course, implies using exceptions everywhere because
you /can't/ use that model in C++ without them.[5]

Again, I think that the existing XEmacs model will probably work well
with this, but I am hardly an expert at either; my only real-world
experience is the one project where I gained these impressions and the
knowledge of the suffering they can bring. :)

Oh, and finally, watch out for operator overloading -- including casts.
These are very easy to abuse into a position where your code is
impossible for others to understand.

I would also advocate avoiding multiple inheritance, but that's because
my personal design experience says that it's almost always a sign of
bad design. Views there vary greatly.

> I do worry that with Martin currently out of the picture we don't have
> an active C++ standards bigot and implementation collector to deal
> with compiler-specific issues.

You probably have more need of the second than the first. There are
not many things you actually need a standards bigot for; just write
good C and don't use too many things other than classes.

> We do OK with C, but C++ is a much more complex, subtle language.

C with classes, or the limited subset of C++ that doesn't include
templates, exceptions, or RTTI, is not much more complex than standard
C. If you add exceptions to that you will probably not notice anything
but a syntax change in the core, given their current standing. Er,
they probably don't work right in signal handlers, though, because they
don't know anything about them.[6]

> Is there anybody else to plausibly take on that role?

I would be happy to look at things that were publicly discussed on the
topic but I don't think I have the experience or the knowledge of the
XEmacs development process to do anything more than that. Not, I
imagine, that anyone would ask. :)

        Daniel

Footnotes:
[1] Archaic Unix ported to i386 from a minicomputer over a decade ago.

[2] The vendor C++ would segfault on anything that had multiple
inheritance. :)

[3] I found them invaluable in resolving a few Win9x vs WinNT symbol
conflicts, for example, but that's obviously target-specific.

[4] The current record for STL-generated name length that I have seen
is a symbol 892 characters long...

[5] The lack of a return code from a class constructor is the killer
issue here.

[6] This, I believe, varies from vendor to vendor. :)
@node Discussion -- Windows External Widget, Discussion -- Packages, Discussion -- Switching to C++, Future Work Discussion
@section Discussion -- Windows External Widget
@cindex discussion, windows external widget
@cindex windows external widget, discussion

@example
Subject: Re: External Widget Support for Xemacs on nt
Date: Sat, 08 Jul 2000 01:47:14 -0700
From: Ben Wing <ben@@666.com>
To: Timothy.Fowler@@msdw.com
CC: xemacs-nt@@xemacs.org
References: 1

Nothing is currently done for external widget support under XEmacs but
it should not be too hard to do and would be a great addition to
XEmacs. What you would probably want to do is create an XEmacs control
that has an interface something like the built-in edit control and
which communicates to an existing XEmacs process using DDE. (Basically
you would modify XEmacs so that it registered itself as a DDE server
accepting external widget requests, and then the external edit control
would simply send a DDE request and the result would be a handle of
some sort used for future communication with that particular XEmacs
process.)

There are two basic issues in getting the external widget to work,
which are display and input. Although I am not completely sure, I have
a feeling that it is possible for one process to write into the window
of another process, simply by using that window's HWND handle. If so,
it should be extremely easy to get the output working (this is exactly
the approach used under Xt). For input, you would probably again want
to do what is done under Xt, which is that the client widget simply
passes all of the appropriate messages to the XEmacs server process
using whatever communication channel was set up, e.g. DDE, and the
XEmacs server processes them normally.

Very few modifications would be needed to the XEmacs source code, and
all of the necessary changes could be made simply by looking at the
existing external widget code in XEmacs.

If you are interested in continuing this, I will certainly give you any
support you need along the way. This would be a great project to be
added to XEmacs.

Timothy Fowler wrote:

> I am looking into external widget support for xemacs nt similar to that
> existing in xemacs for X
> Have any development efforts been made in this direction in the past?
> Is there any current effort?
> Any insight into the complexity of achieving this?
> Any comments would be greatly appreciated
> Thanks
> Tim Fowler

--
Ben

In order to save my hands, I am cutting back on my mail. I also write
as succinctly as possible -- please don't be offended. If you send me
mail, you _will_ get a response, but please be patient, especially for
XEmacs-related mail. If you need an immediate response and it is not
apparent in your message, please say so. Thanks for your
understanding.

See also http://www.666.com/ben/chronic-pain/

Subject: RE: External Widget Support for Xemacs on nt
Date: Mon, 10 Jul 2000 12:40:01 +0100
From: "Alastair J. Houghton" <ajhoughton@@lineone.net>
To: "Ben Wing" <ben@@666.com>, <xemacs-nt@@xemacs.org>
CC: <Timothy.Fowler@@msdw.com>

> -----Original Message-----
> From: owner-xemacs-nt@@xemacs.org [mailto:owner-xemacs-nt@@xemacs.org]On
> Behalf Of Ben Wing
> Sent: 08 July 2000 09:47
> To: Timothy.Fowler@@msdw.com
> Cc: xemacs-nt@@xemacs.org
> Subject: Re: External Widget Support for Xemacs on nt
>
> Nothing is currently done for external widget support under
> XEmacs but it should
> not be too hard to do and would be a great addition to XEmacs.
> What you would
> probably want to do is create an XEmacs control that has an
> interface something
> like the built-in edit control and which communicates to an
> existing XEmacs
> process using DDE.

It would be @strong{much} better to use RPC or COM rather than DDE -
and also it would provide a more useful interface to XEmacs (like the
Microsoft rich text edit control that is used by Wordpad).  It would
probably also be easier...

> If you are interested in continuing this, I will certainly give
> you any support
> you need along the way.  This would be a great project to be added
> to XEmacs.

I agree.  This would be a *really useful* thing to do...

Regards,

Alastair.

____________________________________________________________
Alastair Houghton                   ajhoughton@@lineone.net


Subject: Re: External Widget Support for Xemacs on nt
Date: Mon, 10 Jul 2000 22:56:06 -0700
From: Ben Wing <ben@@666.com>
To: "Alastair J. Houghton" <ajhoughton@@lineone.net>
CC: xemacs-nt@@xemacs.org, Timothy.Fowler@@msdw.com
References: 1

sounds good.  i don't know too much about windows ipc methods, so i
suggested dde just as an example.

"Alastair J. Houghton" wrote:

> > -----Original Message-----
> > From: owner-xemacs-nt@@xemacs.org [mailto:owner-xemacs-nt@@xemacs.org]On
> > Behalf Of Ben Wing
> > Sent: 08 July 2000 09:47
> > To: Timothy.Fowler@@msdw.com
> > Cc: xemacs-nt@@xemacs.org
> > Subject: Re: External Widget Support for Xemacs on nt
> >
> > Nothing is currently done for external widget support under
> > XEmacs but it should
> > not be too hard to do and would be a great addition to XEmacs.
> > What you would
> > probably want to do is create an XEmacs control that has an
> > interface something
> > like the built-in edit control and which communicates to an
> > existing XEmacs
> > process using DDE.
>
> It would be @strong{much} better to use RPC or COM rather than DDE - and
> also it would provide a more useful interface to XEmacs (like the
> Microsoft rich text edit control that is used by Wordpad).  It
> would probably also be easier...
>
> > If you are interested in continuing this, I will certainly give
> > you any support
> > you need along the way.  This would be a great project to be added
> > to XEmacs.
>
> I agree.  This would be a *really useful* thing to do...
>
> Regards,
>
> Alastair.
>
> ____________________________________________________________
> Alastair Houghton                   ajhoughton@@lineone.net

--
Ben

In order to save my hands, I am cutting back on my mail.  I also write
as succinctly as possible -- please don't be offended.  If you send me
mail, you _will_ get a response, but please be patient, especially for
XEmacs-related mail.  If you need an immediate response and it is not
apparent in your message, please say so.  Thanks for your
understanding.

See also http://www.666.com/ben/chronic-pain/
@end example

@node Discussion -- Packages, Discussion -- Distribution Layout, Discussion -- Windows External Widget, Future Work Discussion
@section Discussion -- Packages
@cindex discussion, packages
@cindex packages, discussion

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@subheading Important package-related changes

This file details changes that make the package system no longer an
unmitigated disaster.  This way, at the very least, people can
essentially ignore the package system and not get bitten horribly the
way they currently do.

@enumerate
@item
A single tarball containing absolutely everything and named
xemacs-21.2.68.tar.gz.
This must contain absolutely everything, including all of the
packages, and in the proper directory structure, so that the standard
`untar; configure; make; make install' paradigm just works.

@item
Fixed startup slowdown when all packages are installed, so that there
is absolutely no penalty to having them all installed.  This may be
hard.

@item
All files on the ftp site should be accessible through http.

@item
Put symlinks into the distribution directory to the appropriate files
in the package directory.

@item
Eliminate the confusing SUMO name, choosing a much more obvious name
such as all-packages.

@item
There should be no separation of mule and non-mule packages.

@item
Having 2 packages that conflict with each other should be completely
disallowed.

@item
Fix vc and ps-print so that there is only ONE version.

@item
Fix up all of the READMEs on the distribution site to make it
abundantly clear what needs to be obtained, where to get it, and how
to install it, especially with regard to packages.
@end enumerate

@node Discussion -- Distribution Layout, , Discussion -- Packages, Future Work Discussion
@section Discussion -- Distribution Layout
@cindex discussion, distribution layout
@cindex distribution layout, discussion

@example
From: Ben Wing <ben@@666.com>
10/15/1999 8:50 PM
Subject: VOTE: Absolutely necessary changes to file naming in releases
To: SL Baur <steve@@xemacs.org>, XEmacs Reviews <xemacs-review@@xemacs.org>

Everybody except Steve seems to agree that we need to provide a single
tar file containing the entire XEmacs tree whenever we release a new
version of XEmacs (beta or not).  Therefore I propose the following
simple changes, and ask for a vote.  If it is the general will of the
developers, then Steve @strong{WILL} make these changes.  This is the
definition of cooperative development -- no one, not even the
maintainer, can assert absolute power over anything.

I propose (assuming, for example, release 21.2.20):

1. xemacs-21.2.20.tar.gz -> xemacs-21.2.20-core.tar.gz

2. xemacs-sumo.tar.gz -> xemacs-packages.tar.gz

3. xemacs-mule-sumo.tar.gz -> xemacs-mule-packages.tar.gz

4. Symlinks to the files mentioned in #2 and #3 get created in the SAME
directory as xemacs-21.2.20-*.tar.gz.

5. MOST IMPORTANTLY, a new file xemacs-21.2.20.tar.gz gets created,
which is the combination of the 5 files xemacs-21.2.20-core.tar.gz,
xemacs-21.2.20-elc.tar.gz, xemacs-21.2.20-info.tar.gz,
xemacs-packages.tar.gz, and xemacs-mule-packages.tar.gz.

The directory structure of the new combined file xemacs-21.2.20.tar.gz
would look like this:

xemacs-21.2.20/
xemacs-packages/
xemacs-mule-packages/

I am sorry to shout, but the current situation is just completely
insane.

ben


From: Ben Wing <ben@@666.com>
10/16/1999 3:12 AM
Subject: Re: VOTE: Absolutely necessary changes to file naming in releases
To: SL Baur <steve@@xemacs.org>, XEmacs Reviews <xemacs-review@@xemacs.org>,
    "Michael Sperber [Mr. Preprocessor]" <sperber@@informatik.uni-tuebingen.de>

Something went wrong with my mail program while I was responding, so
Michael's response is not quoted here.

Let me rephrase my proposal, stressing the important points in order
of importance:

1. MOST IMPORTANT: There MUST be a SINGLE tar file containing the
complete XEmacs sources, packages, etc.
The name of this tar file must have a format like this:

xemacs-21.2.10.tar.gz

The directory layout of the packages within it is not important as
long as it works: The user who downloads the tar file MUST be able to
apply the 'configure; make; make install' paradigm at the top-level
directory and have it work properly.

2. All the pieces of XEmacs must be in the @strong{same} subdirectory
on the FTP site.

3. The names need to be obvious and standard.  Naming the core files
"xemacs-21.2.20.tar.gz" is non-standard because those are only the
core files.  The standard followed by everybody in the world is that a
name like this refers to the entire product, with all ancillary files.
Also, "sumo", although a nice in-joke, is extremely confusing and
needs to go.

Referring to Michael's point about the layout I proposed, I also think
that the package system needs to be modified to accept a layout
produced by the "obvious" way of obtaining and untarring the parts,
which leaves you with a directory consisting of

xemacs-21.2.19/
xemacs-packages/
mule-packages/

all at the same level.  However, this is an independent issue from the
vote at hand.

Consider the current insanity.  The new XEmacs user or beta tester
goes to the FTP site, looks around, finds the file
xemacs-21.2.19.tar.gz, and downloads it, because it looks like the
obvious one to get.  But it doesn't work.  Oops ...  He looks some
more and finds the other two -elc and -info parts, grabs them, and
then tries again.  But it still doesn't work.  He manages to overhear
something about packages, so he looks for them, but doesn't find them
immediately (they're not even in the beta tree, though they obviously
contain beta-level code, especially in xemacs-base and mule-base).
Eventually he discovers the package/ subdirectory, but what the hell
does he do there?  There's no README at all there giving any clues, so
he downloads everything.  Along with this, he gets some files called
"sumo", which he doesn't understand, but he notices that some of them
are extremely large.  "sumo" ... "large" ... hehe, I get it.  Some
silly developer's joke.  But then he tries again to compile things,
and just can't figure things out.  He still doesn't know:

-- "sumo" is not just some large file, but is a tar file of all the
packages.

-- The packages can't be placed in any subdirectory in any obvious
relation to the XEmacs directory ("straight out of the box", if you
manage to grok the significance of the sumo files, you get a layout
like

xemacs-21.2.19/
xemacs-packages/
mule-packages/

which naturally doesn't work!  He needs to put them underneath
xemacs-21.2.19/lib/xemacs/ or something.)

At this point, he gives up, and (if he was a user of a pre-packagized
XEmacs) wonders in despair how things got so messed up, when all older
XEmacs releases, including all the betas, followed the standard
"configure; make; make install" paradigm.

Soooooo ......... PLEASE vote on issues #1-3 above, and add any
comments you feel like adding.

ben

Ben Wing wrote:

> Everybody except Steve seems to agree that we need to provide a single
> tar file containing the entire XEmacs tree whenever we release a new
> version of XEmacs (beta or not).  Therefore I propose the following
> simple changes, and ask for a vote.  If it is the general will of the
> developers, then Steve @strong{WILL} make these changes.  This is the
> definition of cooperative development -- no one, not even the
> maintainer, can assert absolute power over anything.
>
> I propose (assuming, for example, release 21.2.20):
>
> 1. xemacs-21.2.20.tar.gz -> xemacs-21.2.20-core.tar.gz
>
> 2. xemacs-sumo.tar.gz -> xemacs-packages.tar.gz
>
> 3. xemacs-mule-sumo.tar.gz -> xemacs-mule-packages.tar.gz
>
> 4. Symlinks to the files mentioned in #2 and #3 get created in the SAME
> directory as xemacs-21.2.20-*.tar.gz.
>
> 5. MOST IMPORTANTLY, a new file xemacs-21.2.20.tar.gz gets created,
> which is the combination of the 5 files xemacs-21.2.20-core.tar.gz,
> xemacs-21.2.20-elc.tar.gz, xemacs-21.2.20-info.tar.gz,
> xemacs-packages.tar.gz, and xemacs-mule-packages.tar.gz.
>
> The directory structure of the new combined file xemacs-21.2.20.tar.gz
> would look like this:
>
> xemacs-21.2.20/
> xemacs-packages/
> xemacs-mule-packages/
>
> I am sorry to shout, but the current situation is just completely
> insane.
>
> ben


From: Ben Wing <ben@@666.com>
12/6/1999 4:19 AM
Subject: Re: Please Vote on Proposals
To: Kyle Jones <kyle_jones@@wonderworks.com>
CC: XEmacs Review <xemacs-review@@xemacs.org>

OK Kyle, how about a different proposal:

1. The distribution consists of the following three parts (let's
assume v21.2.25):

-- xemacs-21.2.25-core.tar.gz

The same as what would currently be in xemacs-21.2.25.tar.gz.  You can
run this editor and edit in fundamental mode, but not do anything
else.

-- xemacs-21.2.25-core-packages.tar.gz

A useful and complete subset of all the possible packages.  Selection
of what goes in and what goes out is based partially on consensus,
partially on vote, and partially on these criteria:

-- commonly-used packages go in.

-- unmaintained or out-of-date packages go out.

-- buggy, poorly-written packages go out.

-- really obscure packages that hardly anybody could possibly care
about go out.

-- when there are two or three packages implementing basically the
same functionality, pick only one to go in unless there are two that
both are really commonly-used.

-- if a package can be loaded implicitly as a result of something in
the core, it needs to go in, regardless of whether it's been
maintained.  This applies, for example, to the mode files --
@strong{all} mode packages must go in (or more properly, every mode
must have a corresponding package that's in, although if there are two
or more packages implementing a particular mode, e.g. html, we are
free to choose just one).

-- xemacs-21.2.25-aux-packages.tar.gz

All of the packages not in the previous file.  Generally
crappy-quality, poorly-maintained code.

Note, we do not make distinctions between Mule and non-Mule in our
packaging scheme -- this is a bug and XEmacs and/or the packages
should be fixed up so that this goes away.

2. The distribution also contains two combination files:

-- xemacs-21.2.25.tar.gz

This is the "default" file that a naive user ought to retrieve, and
he'll get a running XEmacs, just like he wants, and comfortable, too,
because all of the common packages are there.  This file is a
combination of xemacs-21.2.25-core.tar.gz and
xemacs-21.2.25-core-packages.tar.gz.

-- xemacs-21.2.25-everything.tar.gz

This file contains absolutely everything, like it advertises --
including the aux packages and all of their associated crappy-quality,
unmaintained code.  This file is a combination of
xemacs-21.2.25-core.tar.gz, xemacs-21.2.25-core-packages.tar.gz, and
xemacs-21.2.25-aux-packages.tar.gz.

I like this proposal better than the previous one I advocated, because
it follows your good suggestion of separating the wheat from the chaff
in the packages, so to speak.
People will grab xemacs-21.2.25.tar.gz by default, just like they
should, and they'll get something they're quite happy with, and we're
happy because we can exercise quality control over the packages and
exclude the crappy ones most likely to cause grief later on.

What say y'all?

ben

Kyle Jones wrote:

> Ben Wing writes:
>  > Disagree.  Please let's follow everyone else's convention, and not
>  > introduce yet another randomness.
>
> It is not randomness!  I think this is a semantic issue and an
> important one.  The issue is: What do we consider part of XEmacs
> and what is considered external to XEmacs.  If you put all the
> packages in xemacs.tar.gz, then users can reasonably and wrongly
> assume that all this random Lisp code is maintained by us.  We
> are trying to stay away from that model because in the past it has
> left us with piles and piles of orphaned code.  Even if every one
> of us were paid to maintain XEmacs, it is just not practical for
> us to continue to maintain all that code, let alone any new code.
> So I think the naming distinction Jan is making is worth doing.
>
> Also, I don't consider the current situation broken, except
> perhaps the sumo tarball being out of date.  I never, ever,
> thought it was a great idea to ship all the stuff that XEmacs
> shipped in the old days.  Because this pile of code was always
> around in the distribution, an enormous web of undocumented
> dependencies was constructed.  Eventually, you HAD to install
> everything because if you left something out or removed something
> you never knew when XEmacs would throw an error.  Thus the Cult
> of the Cargo was born.
>
> One of the best things that came out of the package system was
> the month or two we spent running XEmacs without all the assorted
> Lisp installed.  Dependencies were removed or documented, some
> stuff got retired, and for the first time we actually had a full
> accounting of what we were shipping.  I currently run XEmacs with
> 7 packages and I don't miss the other stuff.
>
> Having come this far, I do not think we should go back to
> advocating that everyone just install everything and not
> think about what they are doing.  Besides saving space and startup
> time, another reason to not install everything is that you
> won't bloat your XEmacs process nearly as much if you go
> exploring in the Custom menus, because there won't be as much
> Lisp loaded as Custom sets up its groups and whatnot.

--

In order to save my hands, I am cutting back on my responses,
especially to XEmacs-related mail.  You _will_ get a response, but
please be patient.  If you need an immediate response and it is not
apparent in your message, please say so.  Thanks for your
understanding.
@end example

@node Old Future Work, Index, Future Work Discussion, Top
@chapter Old Future Work
@cindex old future work
@cindex future work, old

This chapter includes proposals for future work that were later
implemented.  These proposals are included because they may describe
to some extent the actual workings of the implemented code, and
because they may discuss relevant design issues, alternative
implementations, or work still to be done.
@menu
* Old Future Work -- A Portable Unexec Replacement::
* Old Future Work -- Indirect Buffers::
* Old Future Work -- Improvements in support for non-ASCII (European) keysyms under X::
* Old Future Work -- RTF Clipboard Support::
* Old Future Work -- xemacs.org Mailing Address Changes::
* Old Future Work -- Lisp callbacks from critical areas of the C code::
@end menu

@node Old Future Work -- A Portable Unexec Replacement, Old Future Work -- Indirect Buffers, Old Future Work, Old Future Work
@section Old Future Work -- A Portable Unexec Replacement
@cindex old future work, a portable unexec replacement
@cindex a portable unexec replacement, old future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@strong{Abstract:} Currently, during the build stage of XEmacs, a bare
version of the program (called @dfn{temacs}) is run, which loads up a
bunch of Lisp data and then writes out a modified executable file.
This process is very tricky to implement and highly system-dependent.
It can be replaced by a simple, mostly portable, and easy-to-implement
scheme where the Lisp data is written out to a separate data file.

The scheme makes only three assumptions about the memory layout of a
running XEmacs process, which, as far as I know, are met by all
current implementations of XEmacs (and they're also requirements of
the existing unexec scheme):

@enumerate
@item
The initialized data segments of the various XEmacs modules are all
laid out contiguously in memory and are separated from the initialized
data segments of libraries that are linked with XEmacs; likewise for
uninitialized data segments.

@item
The beginning and end of the XEmacs portion of the combined
initialized data segment can be programmatically determined; likewise
for the uninitialized data segment.

@item
The XEmacs portions of the initialized and uninitialized data segments
are always loaded at the same place in memory.
@end enumerate

Assumption number three means that this scheme is non-relocatable,
which is a disadvantage as compared to other, relocatable schemes that
have been proposed.  However, the advantage of this scheme over them
is that it is much easier to implement and requires minimal changes to
the XEmacs code base.

First, let's go over the theory behind the dumping mechanism.  The
principles that we would like to follow are:

@enumerate
@item
We write out to disk all of the data structures and all of their
sub-structures that we have created ourselves, except for data that is
expected to change from invocation to invocation (in particular, data
that is extracted from the external environment at run time).

@item
We don't write out to disk any data structures created or initialized
by system libraries, by the kernel or by any other code that we didn't
create ourselves, because we can't count on that code working in the
way that we want it to.

@item
At the beginning of the next invocation of our program, we read in all
those data structures that we have written out to disk, and then
continue as if we had just created and initialized all of that data
ourselves.

@item
We make sure that our own data structures don't have any pointers to
system data, or if they do, that we note all of these pointers so that
we can re-create the system data and set up pointers to the data again
in the next invocation.

@item
During the next invocation of our program, we re-create all of our own
data structures that are derived from the external environment.
@end enumerate

XEmacs, of course, is already set up to adhere to most of these
principles.
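
Before looking at how the current dumper deviates from this picture,
here is a minimal sketch of the write-out/read-in cycle that these
principles describe.  All of the names (@code{dumped_data},
@code{pdump_save}, and so on) are invented for illustration and do not
come from the actual source; the static heap array that makes the
fixed addresses possible is discussed below.

@example
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch only.  All dump-time allocations are carved out
   of a static array in XEmacs's own data segment, so every object
   created during dumping lives at a known, fixed address (assumption
   3 above guarantees the array is loaded at the same place each run). */

#define DUMPED_HEAP_SIZE (8 * 1024 * 1024)

static char dumped_data[DUMPED_HEAP_SIZE];
static size_t dumped_data_used;
static int dumping = 1;   /* cleared once the dump phase is over */

void *
xmalloc (size_t size)
{
  if (!dumping)
    return malloc (size);       /* ordinary allocation at run time */
  else
    {
      /* Bump allocation from the static array; a real implementation
         would also handle alignment and check for overflow. */
      void *ptr = dumped_data + dumped_data_used;
      dumped_data_used += size;
      return ptr;
    }
}

/* Dumping: write the used portion of the static heap to a data file.
   (Error checking omitted for brevity.) */
void
pdump_save (const char *file)
{
  FILE *fp = fopen (file, "wb");
  fwrite (&dumped_data_used, sizeof dumped_data_used, 1, fp);
  fwrite (dumped_data, 1, dumped_data_used, fp);
  fclose (fp);
}

/* Next invocation: read the data back into the same static array.
   Since the array sits at the same address, all pointers into it
   remain valid, and we continue as if we had just built the data. */
void
pdump_load (const char *file)
{
  FILE *fp = fopen (file, "rb");
  fread (&dumped_data_used, sizeof dumped_data_used, 1, fp);
  fread (dumped_data, 1, dumped_data_used, fp);
  fclose (fp);
  dumping = 0;
}
@end example

A real implementation would also want a version stamp in the data
file, but note that nothing in this cycle depends on the executable
format, which is the whole point of the scheme.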
In fact, the current dumping process that we are replacing follows a
few of these principles slightly differently and adds a few extra of
its own:

@enumerate
@item
All data structures of all sorts, including system data, are written
out.  This is the cause of no end of problems, and it is avoidable,
because we can ensure that our own data and the system data are
physically separated in memory.

@item
Our own data structures that we derive from the external environment
are in fact written out and read in, but then are simply overwritten
during the next invocation with new data.  Before dumping, we make
sure to free any such data structure that would cause memory leaks.

@item
XEmacs carefully arranges things so that all static variables in the
initialized data are never written to after the dumping stage has
completed.  This allows for an additional optimization in which we can
make static initialized data segments in pre-dumped invocations of
XEmacs be read-only and shared among all XEmacs processes on a single
machine.
@end enumerate

The difficult part in this process is figuring out where our data
structures lie in memory so that we can correctly write them out and
read them back in.  The trick that we use to make this problem
solvable is to ensure that the heap that is used for all dynamically
allocated data structures that are created during the dumping process
is located inside the memory of a large, statically declared array.
This ensures that all of our own data structures are contained (at
least at the time that we dump out our data) inside the static
initialized and uninitialized data segments, which are physically
separated in memory from any data treated by system libraries and
whose starting and ending points are known and unchanging (we know
that all of these things are true because we require them to be so, as
preconditions of being able to make use of this method of dumping).

In order to implement this method of heap allocation, we change the
memory allocation function that we use for our own data.  (It's
extremely important that this function not be used to allocate system
data.  This means that we must not redefine the @code{malloc} function
using the linker, but instead we need to achieve this using the C
preprocessor, or by simply using a different name, such as
@code{xmalloc}.  It's also very important that we use the correct
@code{free} function when freeing dynamically-allocated data,
depending on whether this data was allocated by us or by the system.)

@node Old Future Work -- Indirect Buffers, Old Future Work -- Improvements in support for non-ASCII (European) keysyms under X, Old Future Work -- A Portable Unexec Replacement, Old Future Work
@section Old Future Work -- Indirect Buffers
@cindex old future work, indirect buffers
@cindex indirect buffers, old future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

An indirect buffer is a buffer that shares its text with some other
buffer, but has its own version of all of the buffer properties,
including markers, extents, buffer local variables, etc.  Indirect
buffers are not currently implemented in XEmacs, but they are in GNU
Emacs, and some people have asked for this feature.  I consider this
feature somewhat extent-related because much of the work required to
implement it involves tracking extents properly.

In a world with indirect buffers, some buffers are direct, and some
buffers are indirect.  This only matters when there is more than one
buffer sharing the same text.
In such a case, one of the buffers can be considered the canonical
buffer for the text in question.  This buffer is a direct buffer, and
all of the other buffers sharing the text are indirect buffers.

These two kinds of buffers are created differently.  One of them is
created simply using the @code{make_buffer()} function (or perhaps the
@code{Fget_buffer_create()} function), and the other kind is created
using the @code{make_indirect_buffer()} function, which takes another
buffer as an argument which specifies the text of the indirect buffer
being created.

Every indirect buffer keeps track of the direct buffer that is its
parent, and every direct buffer keeps a list of all of its indirect
buffer children.  This list is modified as buffers are created and
deleted.  Because buffers are permanent objects, there is no special
garbage collection-related trickery involved in these parent and
children pointers.  There should never be an indirect buffer whose
parent is also an indirect buffer.  If the user attempts to set up
such a situation using @code{make_indirect_buffer()}, either an error
should be signaled or the parent of the indirect buffer should
automatically become the direct buffer that actually is responsible
for the text.  Deleting a direct buffer should perhaps cause all of
the indirect buffer children to be deleted automatically.  There
should be Lisp functions for determining whether a buffer is direct or
indirect, and other functions for retrieving the parent or the
children of the buffer, depending on which is appropriate.  (The
scheme being described here is similar to symbolic links.  Another
possible scheme would be analogous to hard links, and would make no
distinction between direct and indirect buffers.  In that case, the
text of the buffer logically exists as an object separate from the
buffer itself and only goes away when the last buffer pointing to this
text is deleted.)

Other than keeping track of parent and child pointers, the only
remaining thing required to implement indirect buffers is to ensure
that changes to the text of the buffer trigger the same sorts of
effects in all the buffers that share that text.  Luckily there are
only three functions in XEmacs that actually make changes to the text
of the buffer, and they are all located in the file @code{insdel.c}.
These three functions are called @code{buffer_insert_string_1()},
@code{buffer_delete_range()}, and @code{buffer_replace_char()}.  All
of the subfunctions called by these functions are also in
@code{insdel.c}.  The first thing that each of these three functions
needs to do is check to see if its buffer argument is an indirect
buffer, and if so, convert it to the indirect buffer's parent.  Once
that is done, the functions need to be modified so that all of the
things they do other than actually changing the buffer's text (calling
before-change-functions and after-change-functions, updating extents
and markers, and so on) are done over all of the buffers that are
indirect children of the buffer being modified, as well as, of course,
for the buffer itself.  Each step in the process needs to be iterated
for all of the buffers in question before proceeding to the next step.
For example, in @code{buffer_insert_string_1()},
@code{prepare_to_modify_buffer()} needs to be called in turn for all
of the buffers sharing the text being modified.  Then the text itself
is modified, then @code{insert_invalidate_line_number_cache()} is
called for all of the buffers, then @code{record_insert()} is called
for all of the buffers, etc.
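
To illustrate the shape of this iteration, here is a self-contained
sketch.  The structure layout and the @code{MAP_SHARING_BUFFERS} macro
are invented for illustration (nothing like them exists in the
source), but the loop is exactly the parallel, step-at-a-time
iteration just described.

@example
#include <stdio.h>

/* Hypothetical, simplified stand-ins for the real buffer structures. */
struct buffer
{
  const char *name;
  struct buffer *base;               /* NULL if this buffer is direct */
  struct buffer *indirect_children;  /* head of the children list     */
  struct buffer *next_indirect;      /* sibling link in that list     */
};

/* Iterate first over the direct buffer owning the text, then over
   every indirect child sharing that text. */
#define MAP_SHARING_BUFFERS(base, b)                            \
  for ((b) = (base); (b) != NULL;                               \
       (b) = ((b) == (base) ? (base)->indirect_children         \
                            : (b)->next_indirect))

static void prepare_to_modify_buffer (struct buffer *b)
{ printf ("prepare %s\n", b->name); }
static void record_insert (struct buffer *b)
{ printf ("record insert in %s\n", b->name); }

/* Shape of the modified buffer_insert_string_1(): each step runs
   across the whole sharing set before the next step begins. */
static void
buffer_insert_string_1 (struct buffer *buf)
{
  struct buffer *base = buf->base ? buf->base : buf;
  struct buffer *b;

  MAP_SHARING_BUFFERS (base, b)
    prepare_to_modify_buffer (b);

  /* ... the text itself is modified here, exactly once ... */

  MAP_SHARING_BUFFERS (base, b)
    record_insert (b);
}

int
main (void)
{
  struct buffer child = { "indirect-1", NULL, NULL, NULL };
  struct buffer base  = { "direct", NULL, &child, NULL };
  child.base = &base;
  buffer_insert_string_1 (&child);  /* redirects to the parent */
  return 0;
}
@end example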
Essentially, the operation is being done on all of the buffers in
parallel, rather than each buffer being processed in series.  This is
necessary because many of the steps can quit or call Lisp code, each
step depends on the previous step, and some steps are done only once,
rather than on each buffer.  I imagine it would be significantly
easier to implement this if a macro were created for iterating over a
buffer and then all of the indirect children of that buffer, along the
lines sketched above.

@node Old Future Work -- Improvements in support for non-ASCII (European) keysyms under X, Old Future Work -- RTF Clipboard Support, Old Future Work -- Indirect Buffers, Old Future Work
@section Old Future Work -- Improvements in support for non-ASCII (European) keysyms under X
@cindex old future work, improvements in support for non-ascii (european) keysyms under x
@cindex improvements in support for non-ascii (european) keysyms under x, old future work

Author: @uref{mailto:martin@@xemacs.org,Martin Buchholz}

If a user has a keyboard with known standard non-ASCII character
equivalents, typically for European users, then Emacs' default binding
should be self-insert-command, with the obvious character inserted.
For example, if a user has a keyboard with

@example
xmodmap -e "keycode 54 = scaron"
@end example

then pressing that key on the keyboard will insert the (Latin-2)
character corresponding to "scaron" into the buffer.

Note: Emacs 20.6 does NOTHING when pressing such a key (not even an
error), i.e. even (read-event) ignores this key, which means it can't
even be bound to anything by a user trying to customize it.

This is implemented by maintaining a table of translations between all
the known X keysym names and the corresponding (charset, octet) pairs.

@quotation
For every key on the keyboard that has a known character
correspondence, we define the ascii-character property of the keysym,
and make the default binding for the key be self-insert-command.

The following magic is basically intimate knowledge of
X11/keysymdef.h.  The keysym mappings defined by X11 are based on the
iso8859 standards, except for Cyrillic and Greek.

In a non-Mule world, a user can still have a multi-lingual editor, by
doing (set-face-font "...-iso8859-2" (current-buffer)) for all their
Latin-2 buffers, etc.
@end quotation

@node Old Future Work -- RTF Clipboard Support, Old Future Work -- xemacs.org Mailing Address Changes, Old Future Work -- Improvements in support for non-ASCII (European) keysyms under X, Old Future Work
@section Old Future Work -- RTF Clipboard Support
@cindex old future work, RTF clipboard support
@cindex RTF clipboard support, old future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

in fact, i merged the windows stuff with the already-existing generic
code.  what i'd like to see is something like this:

@enumerate
@item
The current function

@example
(defun own-selection (data &optional type append)
@end example

should become

@example
(defun own-selection (data &optional type how-to-add data-type)
@end example

where data-type is the mswindows format, and how-to-add is

@example
'replace-all or nil -- remove data for all formats
'replace-existing   -- remove data for DATA-TYPE, but leave other
                       formats alone
'append or t        -- append data to existing data in DATA-TYPE,
                       and leave other formats alone
@end example

@item
the function

@example
(get-selection &optional TYPE DATA-TYPE)
@end example

already has a data-type so you don't need to change it.
@item
the existing function

@example
(selection-exists-p &optional SELECTION DEVICE)
@end example

should become

@example
(selection-exists-p &optional SELECTION DEVICE DATA-TYPE)
@end example

@item
a new function

@example
(register-selection-data-type DATA-TYPE)
@end example

like your mswindows-register-clipboard-format.

@item
there's already a selection-converter-alist, but that's only for data
out.  you should alias it to selection-conversion-out-alist, and
create selection-conversion-in-alist.  these alists contain entries
for CF_TEXT, which handles CR/LF conversion, and rtf, which does rtf
in/out conversion -- no need for separate functions to do this.

this may seem daunting, but it's much easier to add stuff like this
than it seems, and i and others will certainly give you lots of
support if you run into problems.  it would be way cool to have a more
powerful clipboard mechanism in XEmacs.
@end enumerate

@node Old Future Work -- xemacs.org Mailing Address Changes, Old Future Work -- Lisp callbacks from critical areas of the C code, Old Future Work -- RTF Clipboard Support, Old Future Work
@section Old Future Work -- xemacs.org Mailing Address Changes
@cindex old future work, xemacs.org mailing address changes
@cindex xemacs.org mailing address changes, old future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

@subheading Personal addresses

@enumerate
@item
Everyone who is contributing or has ever contributed code to the
XEmacs core, or to any of the packages archived at xemacs.org, even if
they don't actually have an account on any machine at xemacs.org.  In
fact, all of these people should have two mailing addresses at
xemacs.org, one of which is their actual login name (or potential
login name if they were ever to have an account), and the other one is
in the form of first name/last name, similar to the way things are
done at Sun.  For example, Martin would have two addresses at
xemacs.org, @code{martin@@xemacs.org} and
@code{martin.buchholz@@xemacs.org}, with the latter one simply being
an alias for the former.  The idea is that in all cases, if you simply
know the name of any past or present contributor to XEmacs, and you
want to mail them, you will know immediately how to do this without
having to do any complicated searching on the Web or in XEmacs
documentation.

@item
Furthermore, I think that all of the email addresses mentioned
anywhere in the XEmacs source code or documentation should be changed
to be the corresponding ones at xemacs.org, instead of any other email
addresses that any contributors might have.

@item
All the places in the source code where a contributor's name is
mentioned, but no email address is attached, should be found, and the
correct xemacs.org address should be attached.

@item
The alias file mapping people's addresses at xemacs.org to their
actual addresses elsewhere (in the case, as will be true for the
majority of addresses, where the contributor does not actually have an
account at xemacs.org, but simply a forwarding pointer), should be
viewable on the xemacs.org web site through a CGI script that reads
the alias file and turns it into an HTML table.
@end enumerate

@subheading Package addresses

I also think that for every package archived at xemacs.org, there
should be three corresponding email addresses at xemacs.org.  For
example, consider a package such as @code{lazy-shot}.
The addresses associated with this package would be:

@table @code
@item lazy-shot@@xemacs.org
This is a discussion mailing list about the @code{lazy-shot} package,
and it should be controlled by Majordomo in the standard fashion.

@item lazy-shot-patches@@xemacs.org
This is where patches to the @code{lazy-shot} package are sent.  This
should go to various people who are interested in such patches.  For
example, the maintainer of @code{lazy-shot}, perhaps the maintainer of
XEmacs itself, and probably to other people who have volunteered to do
code review for this package, or for a larger group of packages that
this package is in.  Perhaps this list should also be maintained by
Majordomo.

@item lazy-shot-maintainer@@xemacs.org
This address is for mailing the maintainer directly.  It is possible
that this will go to more than one person.  This would particularly be
the case, for example, if the maintainer is dormant or does not appear
very responsive to patches.  In this case, the address would also
point to someone like Steve, who is acting in the maintainer's stead,
and who will himself apply patches or make other changes to the
package as maintained in the CVS archive on xemacs.org.
@end table

It may take a bit of work to track down the current addresses for the
various package maintainers, and it may in general seem like a lot of
work to set up all of these mail addresses, but I think it's very
important to make it as easy as possible for random XEmacs users to be
able to submit patches and report bugs in an orderly fashion.  The
general idea that I'm striving for is to create as much momentum as
possible in the XEmacs development community, and I think having the
system of mail addresses set up will make it much easier for this
momentum to be built up and to remain.

@uref{../../www.666.com/ben/default.htm,Ben Wing}

@node Old Future Work -- Lisp callbacks from critical areas of the C code, , Old Future Work -- xemacs.org Mailing Address Changes, Old Future Work
@section Old Future Work -- Lisp callbacks from critical areas of the C code
@cindex old future work, lisp callbacks from critical areas of the c code
@cindex lisp callbacks from critical areas of the c code, old future work

Author: @uref{mailto:ben@@xemacs.org,Ben Wing}

There are many places in the XEmacs C code where Lisp functions are
called, usually because the Lisp function is acting as a callback,
hook, process filter, or the like.  The Lisp code is often called in
places where some Lisp operations are dangerous.  Currently there are
a lot of ad-hoc schemes implemented to try to prevent these dangerous
operations from causing problems.  I've added a lot of them myself,
for example, the @code{call*_trapping_errors()} functions.  Other
places, such as the pre-gc- and post-gc-hooks, do their own ad hoc
processing.

I'm proposing a scheme that would generalize all of this ad hoc code
and allow Lisp code to be called in all sorts of sensitive areas of
the C code, including even within redisplay.  Basically, we define a
set of operations that are disallowable because they are dangerous.
We essentially assign a bit flag to each of these operations.
Whenever any sensitive C code wants to call Lisp code, instead of
using the standard call* functions, it uses a new set of functions,
call*_critical, which take an extra parameter: a bit mask specifying
the set of operations that are disallowed.
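
As a rough sketch of the proposed interface (the flag values and
helper names below are invented, since this proposal predates any
implementation, and for brevity the sketch restores the old mask
directly rather than through the @code{unwind_protect()} machinery
described next):

@example
/* Hypothetical sketch of the proposed interface. */

typedef void (*lisp_callback) (void);

#define OPERATION_GC_PROHIBITED             (1 << 0)
#define OPERATION_CATCH_ERRORS              (1 << 1)
#define OPERATION_NO_UNSAFE_OBJECT_DELETION (1 << 2)
#define OPERATION_NO_BUFFER_MODIFICATION    (1 << 3)
#define OPERATION_NO_REDISPLAY              (1 << 4)

/* Global mask of currently prohibited operations; sensitive
   subsystems (the GC trigger, redisplay entry points, ...) check it. */
static unsigned int prohibited_operations;

unsigned int
enter_sensitive_code_section (unsigned int flags)
{
  unsigned int old = prohibited_operations;
  prohibited_operations |= flags;  /* combine with any outer section */
  return old;
}

void
exit_sensitive_code_section (unsigned int old)
{
  prohibited_operations = old;
}

/* Like the call* functions, but with a set of operations disallowed
   while the callback runs.  In real XEmacs this would take a Lisp
   function and arguments rather than a bare C callback. */
void
call_critical (unsigned int flags, lisp_callback fn)
{
  unsigned int old = enter_sensitive_code_section (flags);
  fn ();
  exit_sensitive_code_section (old);
}

/* Example consumer: when GC is prohibited, the collector simply
   declines to run and waits for the next allowed opportunity. */
void
maybe_garbage_collect (void)
{
  if (prohibited_operations & OPERATION_GC_PROHIBITED)
    return;
  /* ... actually collect ... */
}
@end example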
The basic operation of these functions is simply to set a global
variable corresponding to the bit mask (more specifically, the
functions store the previous value of this global variable in an
unwind_protect, and use bitwise-or to combine the previous value with
the new bit mask that was passed in).

(Actually, we should first implement a slightly lower-level function
called @code{enter_sensitive_code_section()}, which simply sets up the
global variable and the @code{unwind_protect()}, and returns a
@code{specbind()} value, but doesn't actually call any Lisp code.
There is a corresponding function
@code{exit_sensitive_code_section()}, which takes the specbind value
as an argument, and unwinds the unwind_protect.  The call*_critical
functions are trivially implemented in terms of these lower-level
functions.)

Corresponding to each of the entries below is the C name of the bit
flag.  The sets of dangerous operations which can be prohibited are:

@table @code
@item OPERATION_GC_PROHIBITED
garbage collection.  When this flag is set and the garbage collection
threshold is reached, garbage collection simply doesn't happen.  It
will happen at the next opportunity that it is allowed.  Similarly,
explicitly calling the Lisp function garbage-collect simply does
nothing.

@item OPERATION_CATCH_ERRORS
signalling an error.  When this bit flag is passed to
@code{enter_sensitive_code_section()}, a catch is set up which catches
all errors, signals a warning with @code{warn_when_safe()}, and then
simply continues.  This is exactly the same behavior you now get with
the @code{call_*_trapping_errors()} functions.  (There should also be
some way of specifying a warning level and class here, similar to the
@code{call_*_trapping_errors()} functions.  This is not completely
important, however, because a standard warning level and class could
simply be chosen.)

@item OPERATION_NO_UNSAFE_OBJECT_DELETION
This flag prohibits deletion of any permanent object (i.e. any object
that, once created, does not automatically disappear, such as buffers,
frames, devices, windows, etc.) unless it was created after this bit
flag was set.  This would be implemented using a list which stores all
of the permanent objects created after this bit flag was set.  This
list is reset to its previous value when the call to
@code{exit_sensitive_code_section()} occurs.  The motivation here is
to allow Lisp callbacks to create their own temporary buffers or
frames, and later delete them, but not allow any other permanent
objects to be deleted, because C code might be working with them, and
not expect them to change.

@item OPERATION_NO_BUFFER_MODIFICATION
This flag disallows modifications to the text, extent or any other
properties of any buffers except those created after this flag was
set, just like in the previous entry.

@item OPERATION_NO_REDISPLAY
This bit flag inhibits any redisplay-related operations from
happening, more specifically, any entry into the redisplay-related
code.  This includes, for example, the Lisp functions sit-for,
force-redisplay, force-cursor-redisplay, window-end with certain
arguments to it, and various other functions.  When this flag is set,
the calling function should simply make sure not to enter the
redisplay code (for example, in the case of window-end), or postpone
the redisplay until such a time when it's safe (for example, with
sit-for and force-redisplay).
@item OPERATION_NO_REDISPLAY_SETTINGS_CHANGE
This flag prohibits any modifications to faces, glyphs, specifiers,
extents, or any other settings that will affect the way that any
window is displayed.
@end table

The idea here is that it will finally be safe to call Lisp code from
nearly any part of the C code, simply by setting any combination of
restricted-operation bit flags.  This even includes calls from within
redisplay (in such a case, all of the bit flags need to be set).  The
reason that I thought of this is that some coding system translations
might cause Lisp code to be invoked, and C code often invokes these
translations in sensitive places.

@c Indexing guidelines

@c I assume that all indexes will be combined.
@c Therefore, if a generated findex and permutations
@c cover the ways an index user would look up the entry,
@c then no cindex is added.

@c Concept index (cindex) entries will also be permuted.  Therefore, they
@c have no commas and few irrelevant connectives in them.

@c I tried to include words in a cindex that give the context of the entry,
@c particularly if there is more than one entry for the same concept.
@c For example, "nil in keymap"

@c Similarly for explicit findex and vindex entries, e.g. "print example".

@c Error codes are given cindex entries, e.g. "end-of-file error".

@c pindex is used for .el files and Unix programs

@node Index, , Old Future Work, Top
@unnumbered Index

@ignore
All variables, functions, keys, programs, files, and concepts are in
this one index.

All names and concepts are permuted, so they appear several times, one
for each permutation of the parts of the name.  For example,
@code{function-name} would appear as @b{function-name} and @b{name,
function-}.  Key entries are not permuted, however.
@end ignore

@c Print the indices

@printindex fn

@c Print the tables of contents

@summarycontents
@contents

@c That's all

@bye