comparison src/elhash.c @ 665:fdefd0186b75
[xemacs-hg @ 2001-09-20 06:28:42 by ben]
The great integral types renaming.
The purpose of this is to rationalize the names used for various
integral types, so that they match their intended uses and follow
consistent conventions, and to eliminate types that were not semantically
different from each other.
The conventions are:
-- All integral types that measure quantities of anything are
signed. Some people disagree vociferously with this, but their
arguments are mostly theoretical, and are vastly outweighed by
the practical headaches of mixing signed and unsigned values,
and more importantly by the far increased likelihood of
inadvertent bugs: Because of the broken "viral" nature of
unsigned quantities in C (operations involving mixed
signed/unsigned are done unsigned, when exactly the opposite is
nearly always wanted), even a single error in declaring a
quantity unsigned that should be signed, or even the even more
subtle error of comparing signed and unsigned values and
forgetting the necessary cast, can be catastrophic, as
comparisons will yield wrong results. -Wsign-compare is turned
on specifically to catch this, but it tends to produce a great
number of warnings when signed and unsigned are mixed, and the
casts needed to silence them are annoying. More has been written
on this elsewhere; a short illustration follows this list.
-- All such quantity types just mentioned boil down to EMACS_INT,
which is 32 bits on 32-bit machines and 64 bits on 64-bit
machines. This is guaranteed to be the same size as Lisp
objects of type `int', and (as far as I can tell) of size_t
(unsigned!) and ssize_t. The only type below that is not an
EMACS_INT is Hashcode, which is an unsigned value of the same
size as EMACS_INT.
-- Type names should be relatively short (no more than 10
characters or so), with the first letter capitalized and no
underscores if they can at all be avoided.
-- "count" == a zero-based measurement of some quantity. Includes
sizes, offsets, and indexes.
-- "bpos" == a one-based measurement of a position in a buffer.
"Charbpos" and "Bytebpos" count text in the buffer, rather than
bytes in memory; thus Bytebpos does not directly correspond to
the memory representation. Use "Membpos" for this.
-- "Char" refers to internal-format characters, not to the C type
"char", which is really a byte.
-- For the actual name changes, see the script below.
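To make the signed/unsigned pitfall above concrete, here is a minimal,
hypothetical C sketch (not code from the tree): the comparison is
performed unsigned, so the -1 silently compares as a huge positive
value instead of as a small negative one.
--------------------------------- snip -------------------------------------
/* Hypothetical sketch of the "viral" unsigned behavior described above;
   not code from the XEmacs sources. */
#include <stdio.h>

int
main (void)
{
  int signed_count = -1;             /* e.g. an error return */
  unsigned int unsigned_limit = 10;  /* a quantity mistakenly declared unsigned */

  /* The usual arithmetic conversions turn signed_count into a huge
     unsigned value, so the test is false even though -1 < 10. */
  if (signed_count < unsigned_limit)
    printf ("within limit\n");
  else
    printf ("NOT within limit -- the comparison was done unsigned\n");

  return 0;
}
--------------------------------- snip -------------------------------------
Compiled with -Wsign-compare, gcc flags the comparison; without that
flag, the wrong branch is taken silently.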
I ran the following script to do the conversion. (NOTE: This script
is idempotent. You can safely run it multiple times and it will
not screw up previous results -- in fact, it will do nothing if
nothing has changed. Thus, it can be run repeatedly as necessary
to handle patches coming in from old workspaces, or old branches.)
There are two tags, just before and just after the change:
`pre-integral-type-rename' and `post-integral-type-rename'. When
merging code from the main trunk into a branch, the best thing to
do is first merge up to `pre-integral-type-rename', then apply the
script and associated changes, then merge from
`post-integral-type-rename' to the present. (Alternatively, just do
the merging in one operation; but you may then have a lot of
conflicts needing to be resolved by hand.)
Script `fixtypes.sh' follows:
----------------------------------- cut ------------------------------------
files="*.[ch] s/*.h m/*.h config.h.in ../configure.in Makefile.in.in ../lib-src/*.[ch] ../lwlib/*.[ch]"
gr Memory_Count Bytecount $files
gr Lstream_Data_Count Bytecount $files
gr Element_Count Elemcount $files
gr Hash_Code Hashcode $files
gr extcount bytecount $files
gr bufpos charbpos $files
gr bytind bytebpos $files
gr memind membpos $files
gr bufbyte intbyte $files
gr Extcount Bytecount $files
gr Bufpos Charbpos $files
gr Bytind Bytebpos $files
gr Memind Membpos $files
gr Bufbyte Intbyte $files
gr EXTCOUNT BYTECOUNT $files
gr BUFPOS CHARBPOS $files
gr BYTIND BYTEBPOS $files
gr MEMIND MEMBPOS $files
gr BUFBYTE INTBYTE $files
gr MEMORY_COUNT BYTECOUNT $files
gr LSTREAM_DATA_COUNT BYTECOUNT $files
gr ELEMENT_COUNT ELEMCOUNT $files
gr HASH_CODE HASHCODE $files
----------------------------------- cut ------------------------------------
`fixtypes.sh' is a Bourne-shell script; it uses `gr':
----------------------------------- cut ------------------------------------
#!/bin/sh
# Usage is like this:
# gr FROM TO FILES ...
# globally replace FROM with TO in FILES. FROM and TO are regular expressions.
# backup files are stored in the `backup' directory.
from="$1"
to="$2"
shift 2
echo ${1+"$@"} | xargs global-replace "s/$from/$to/g"
----------------------------------- cut ------------------------------------
`gr' in turn uses a Perl script to do its real work,
`global-replace', which follows:
----------------------------------- cut ------------------------------------
: #-*- Perl -*-
### global-modify --- modify the contents of a file by a Perl expression
## Copyright (C) 1999 Martin Buchholz.
## Copyright (C) 2001 Ben Wing.
## Authors: Martin Buchholz <martin@xemacs.org>, Ben Wing <ben@xemacs.org>
## Maintainer: Ben Wing <ben@xemacs.org>
## Current Version: 1.0, May 5, 2001
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with XEmacs; see the file COPYING. If not, write to the Free
# Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.
eval 'exec perl -w -S $0 ${1+"$@"}'
if 0;
use strict;
use FileHandle;
use Carp;
use Getopt::Long;
use File::Basename;
(my $myName = $0) =~ s@.*/@@; my $usage="
Usage: $myName [--help] [--backup-dir=DIR] [--line-mode] [--hunk-mode]
PERLEXPR FILE ...
Globally modify a file, either line by line or in one big hunk.
Typical usage is like this:
[with GNU print, GNU xargs: guaranteed to handle spaces, quotes, etc.
in file names]
find . -name '*.[ch]' -print0 | xargs -0 $0 's/\bCONST\b/const/g'\n
[with non-GNU print, xargs]
find . -name '*.[ch]' -print | xargs $0 's/\bCONST\b/const/g'\n
The file is read in, either line by line (with --line-mode specified)
or in one big hunk (with --hunk-mode specified; it's the default), and
the Perl expression is then evalled with \$_ set to the line or hunk of
text, including the terminating newline if there is one. It should
destructively modify the value there, storing the changed result in \$_.
Files in which any modifications are made are backed up to the directory
specified using --backup-dir, or to `backup' by default. To disable this,
use --backup-dir= with no argument.
Hunk mode is the default because it is MUCH MUCH faster than line-by-line.
Use line-by-line only when it matters, e.g. you want to do a replacement
only once per line (the default without the `g' argument). Conversely,
when using hunk mode, *ALWAYS* use `g'; otherwise, you will only make one
replacement in the entire file!
";
my %options = ();
$Getopt::Long::ignorecase = 0;
&GetOptions (
\%options,
'help', 'backup-dir=s', 'line-mode', 'hunk-mode',
);
die $usage if $options{"help"} or @ARGV <= 1;
my $code = shift;
die $usage if grep (-d || ! -w, @ARGV);
sub SafeOpen {
open ((my $fh = new FileHandle), $_[0]);
confess "Can't open $_[0]: $!" if ! defined $fh;
return $fh;
}
sub SafeClose {
close $_[0] or confess "Can't close $_[0]: $!";
}
sub FileContents {
my $fh = SafeOpen ("< $_[0]");
my $olddollarslash = $/;
local $/ = undef;
my $contents = <$fh>;
$/ = $olddollarslash;
return $contents;
}
sub WriteStringToFile {
my $fh = SafeOpen ("> $_[0]");
binmode $fh;
print $fh $_[1] or confess "$_[0]: $!\n";
SafeClose $fh;
}
foreach my $file (@ARGV) {
my $changed_p = 0;
my $new_contents = "";
if ($options{"line-mode"}) {
my $fh = SafeOpen $file;
while (<$fh>) {
my $save_line = $_;
eval $code;
$changed_p = 1 if $save_line ne $_;
$new_contents .= $_;
}
} else {
my $orig_contents = $_ = FileContents $file;
eval $code;
if ($_ ne $orig_contents) {
$changed_p = 1;
$new_contents = $_;
}
}
if ($changed_p) {
my $backdir = $options{"backup-dir"};
$backdir = "backup" if !defined ($backdir);
if ($backdir) {
my ($name, $path, $suffix) = fileparse ($file, "");
my $backfulldir = $path . $backdir;
my $backfile = "$backfulldir/$name";
mkdir $backfulldir, 0755 unless -d $backfulldir;
print "modifying $file (original saved in $backfile)\n";
rename $file, $backfile;
}
WriteStringToFile ($file, $new_contents);
}
}
----------------------------------- cut ------------------------------------
In addition to those programs, I needed to fix up a few other
things, particularly relating to the duplicate definitions of
types, now that some types have been merged into others. Specifically:
1. in lisp.h, removed duplicate declarations of Bytecount. The
changed code should now look like this: (In each code snippet
below, the first and last lines are the same as the original, as
are all lines outside of those lines. That allows you to locate
the section to be replaced, and replace the stuff in that
section, verifying that there isn't anything new added that
would need to be kept.)
--------------------------------- snip -------------------------------------
/* Counts of bytes or chars */
typedef EMACS_INT Bytecount;
typedef EMACS_INT Charcount;
/* Counts of elements */
typedef EMACS_INT Elemcount;
/* Hash codes */
typedef unsigned long Hashcode;
/* ------------------------ dynamic arrays ------------------- */
--------------------------------- snip -------------------------------------
2. in lstream.h, removed duplicate declaration of Bytecount.
Rewrote the comment about this type. The changed code should
now look like this:
--------------------------------- snip -------------------------------------
#endif
/* There have been some arguments over what the type should be that
specifies a count of bytes in a data block to be written out or read in,
using Lstream_read(), Lstream_write(), and related functions.
Originally it was long, which worked fine; Martin "corrected" these to
size_t and ssize_t on the grounds that this is theoretically cleaner and
is in keeping with the C standards. Unfortunately, this practice is
horribly error-prone due to design flaws in the way that mixed
signed/unsigned arithmetic happens. In fact, by doing this change,
Martin introduced a subtle but fatal error that caused the operation of
sending large mail messages to the SMTP server under Windows to fail.
By putting all values back to be signed, avoiding any signed/unsigned
mixing, the bug immediately went away. The type used at that point
was Lstream_Data_Count, so that the change could be reverted cleanly
if a vote came to that. Now it is Bytecount.
Some earlier comments on why the type must be signed: it MUST BE
SIGNED, since it is also used in functions that return the number of
bytes actually read or written in an operation, and these functions
can return -1 to signal an error.
Note that the standard Unix read() and write() functions define the
count going in as a size_t, which is UNSIGNED, and the count going
out as an ssize_t, which is SIGNED. This is a horrible design
flaw. Not only is it highly likely to lead to logic errors when a
-1 gets interpreted as a large positive number, but operations are
bound to fail in all sorts of horrible ways when a number in the
upper-half of the size_t range is passed in -- this number is
unrepresentable as an ssize_t, so code that checks to see how many
bytes are actually written (which is mandatory if you are dealing
with certain types of devices) will get completely screwed up.
--ben
*/
typedef enum lstream_buffering
--------------------------------- snip -------------------------------------
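To illustrate the point of that lstream.h comment, here is a
hypothetical sketch (the drain_unsigned/drain_signed helpers are made
up for illustration; this is not the actual Lstream or SMTP-sending
code) of why the byte count must be signed for a -1 error return from
write() to be detectable:
--------------------------------- snip -------------------------------------
/* Hypothetical illustration of the signed-count argument above;
   not the actual Lstream or SMTP-sending code. */
#include <sys/types.h>
#include <unistd.h>

/* Broken: with an unsigned count, the error check is dead code, since
   (size_t) -1 can never be less than zero.  On error, n wraps to
   SIZE_MAX and buf/len run off into the weeds. */
int
drain_unsigned (int fd, const char *buf, size_t len)
{
  while (len > 0)
    {
      size_t n = write (fd, buf, len);  /* write() actually returns ssize_t */
      if (n < 0)                        /* never true: n is unsigned */
        return -1;
      buf += n;
      len -= n;
    }
  return 0;
}

/* Fixed: with a signed count (Bytecount boils down to a signed
   EMACS_INT), the -1 is seen and reported. */
int
drain_signed (int fd, const char *buf, ssize_t len)
{
  while (len > 0)
    {
      ssize_t n = write (fd, buf, len);
      if (n < 0)
        return -1;
      buf += n;
      len -= n;
    }
  return 0;
}
--------------------------------- snip -------------------------------------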
3. in dumper.c, there are four places, all inside of switch()
statements, where XD_BYTECOUNT appears twice as a case tag. In
each case, the two case blocks contain identical code, and you
should *REMOVE THE SECOND* and leave the first.
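Schematically, the fix looks like the sketch below (an illustration
only, not the literal dumper.c code; `desc' merely stands for whatever
the switch examines). The renaming collapses what used to be separate
XD_MEMORY_COUNT and XD_LSTREAM_DATA_COUNT cases into two identical
XD_BYTECOUNT case labels, which C rejects as duplicates; keep the
first block and delete the second, leaving:
--------------------------------- snip -------------------------------------
/* Illustration only -- not the literal dumper.c code.  Before the fix,
   each affected switch contained two identical labels:

       case XD_BYTECOUNT:           (was XD_MEMORY_COUNT)
         ...handle the byte count...
         break;
       case XD_BYTECOUNT:           (was XD_LSTREAM_DATA_COUNT -- duplicate!)
         ...identical code...
         break;

   Duplicate case labels are a compile error, so only the first block
   survives: */
switch (desc->type)
  {
    /* ... other cases ... */
  case XD_BYTECOUNT:
    /* the byte-count handling, kept once */
    break;
    /* ... other cases ... */
  }
--------------------------------- snip -------------------------------------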
author | ben |
---|---|
date | Thu, 20 Sep 2001 06:31:11 +0000 |
parents | b39c14581166 |
children | 943eaba38521 |
664:6e99cc8c6ca5 (before) | 665:fdefd0186b75 (after) |
---|---|
80 } hentry; | 80 } hentry; |
81 | 81 |
82 struct Lisp_Hash_Table | 82 struct Lisp_Hash_Table |
83 { | 83 { |
84 struct lcrecord_header header; | 84 struct lcrecord_header header; |
85 Element_Count size; | 85 Elemcount size; |
86 Element_Count count; | 86 Elemcount count; |
87 Element_Count rehash_count; | 87 Elemcount rehash_count; |
88 double rehash_size; | 88 double rehash_size; |
89 double rehash_threshold; | 89 double rehash_threshold; |
90 Element_Count golden_ratio; | 90 Elemcount golden_ratio; |
91 hash_table_hash_function_t hash_function; | 91 hash_table_hash_function_t hash_function; |
92 hash_table_test_function_t test_function; | 92 hash_table_test_function_t test_function; |
93 hentry *hentries; | 93 hentry *hentries; |
94 enum hash_table_weakness weakness; | 94 enum hash_table_weakness weakness; |
95 Lisp_Object next_weak; /* Used to chain together all of the weak | 95 Lisp_Object next_weak; /* Used to chain together all of the weak |
103 | 103 |
104 #define HASH_TABLE_DEFAULT_SIZE 16 | 104 #define HASH_TABLE_DEFAULT_SIZE 16 |
105 #define HASH_TABLE_DEFAULT_REHASH_SIZE 1.3 | 105 #define HASH_TABLE_DEFAULT_REHASH_SIZE 1.3 |
106 #define HASH_TABLE_MIN_SIZE 10 | 106 #define HASH_TABLE_MIN_SIZE 10 |
107 | 107 |
108 #define HASH_CODE(key, ht) \ | 108 #define HASHCODE(key, ht) \ |
109 ((((ht)->hash_function ? (ht)->hash_function (key) : LISP_HASH (key)) \ | 109 ((((ht)->hash_function ? (ht)->hash_function (key) : LISP_HASH (key)) \ |
110 * (ht)->golden_ratio) \ | 110 * (ht)->golden_ratio) \ |
111 % (ht)->size) | 111 % (ht)->size) |
112 | 112 |
113 #define KEYS_EQUAL_P(key1, key2, testfun) \ | 113 #define KEYS_EQUAL_P(key1, key2, testfun) \ |
141 #else | 141 #else |
142 #define check_hash_table_invariants(ht) | 142 #define check_hash_table_invariants(ht) |
143 #endif | 143 #endif |
144 | 144 |
145 /* Return a suitable size for a hash table, with at least SIZE slots. */ | 145 /* Return a suitable size for a hash table, with at least SIZE slots. */ |
146 static Element_Count | 146 static Elemcount |
147 hash_table_size (Element_Count requested_size) | 147 hash_table_size (Elemcount requested_size) |
148 { | 148 { |
149 /* Return some prime near, but greater than or equal to, SIZE. | 149 /* Return some prime near, but greater than or equal to, SIZE. |
150 Decades from the time of writing, someone will have a system large | 150 Decades from the time of writing, someone will have a system large |
151 enough that the list below will be too short... */ | 151 enough that the list below will be too short... */ |
152 static const Element_Count primes [] = | 152 static const Elemcount primes [] = |
153 { | 153 { |
154 19, 29, 41, 59, 79, 107, 149, 197, 263, 347, 457, 599, 787, 1031, | 154 19, 29, 41, 59, 79, 107, 149, 197, 263, 347, 457, 599, 787, 1031, |
155 1361, 1777, 2333, 3037, 3967, 5167, 6719, 8737, 11369, 14783, | 155 1361, 1777, 2333, 3037, 3967, 5167, 6719, 8737, 11369, 14783, |
156 19219, 24989, 32491, 42257, 54941, 71429, 92861, 120721, 156941, | 156 19219, 24989, 32491, 42257, 54941, 71429, 92861, 120721, 156941, |
157 204047, 265271, 344857, 448321, 582821, 757693, 985003, 1280519, | 157 204047, 265271, 344857, 448321, 582821, 757693, 985003, 1280519, |
186 /* This is wrong anyway. You can't use strcmp() on Lisp strings, | 186 /* This is wrong anyway. You can't use strcmp() on Lisp strings, |
187 because they can contain zero characters. */ | 187 because they can contain zero characters. */ |
188 return !strcmp ((char *) XSTRING_DATA (str1), (char *) XSTRING_DATA (str2)); | 188 return !strcmp ((char *) XSTRING_DATA (str1), (char *) XSTRING_DATA (str2)); |
189 } | 189 } |
190 | 190 |
191 static Hash_Code | 191 static Hashcode |
192 lisp_string_hash (Lisp_Object obj) | 192 lisp_string_hash (Lisp_Object obj) |
193 { | 193 { |
194 return hash_string (XSTRING_DATA (str), XSTRING_LENGTH (str)); | 194 return hash_string (XSTRING_DATA (str), XSTRING_LENGTH (str)); |
195 } | 195 } |
196 | 196 |
200 lisp_object_eql_equal (Lisp_Object obj1, Lisp_Object obj2) | 200 lisp_object_eql_equal (Lisp_Object obj1, Lisp_Object obj2) |
201 { | 201 { |
202 return EQ (obj1, obj2) || (FLOATP (obj1) && internal_equal (obj1, obj2, 0)); | 202 return EQ (obj1, obj2) || (FLOATP (obj1) && internal_equal (obj1, obj2, 0)); |
203 } | 203 } |
204 | 204 |
205 static Hash_Code | 205 static Hashcode |
206 lisp_object_eql_hash (Lisp_Object obj) | 206 lisp_object_eql_hash (Lisp_Object obj) |
207 { | 207 { |
208 return FLOATP (obj) ? internal_hash (obj, 0) : LISP_HASH (obj); | 208 return FLOATP (obj) ? internal_hash (obj, 0) : LISP_HASH (obj); |
209 } | 209 } |
210 | 210 |
212 lisp_object_equal_equal (Lisp_Object obj1, Lisp_Object obj2) | 212 lisp_object_equal_equal (Lisp_Object obj1, Lisp_Object obj2) |
213 { | 213 { |
214 return internal_equal (obj1, obj2, 0); | 214 return internal_equal (obj1, obj2, 0); |
215 } | 215 } |
216 | 216 |
217 static Hash_Code | 217 static Hashcode |
218 lisp_object_equal_hash (Lisp_Object obj) | 218 lisp_object_equal_hash (Lisp_Object obj) |
219 { | 219 { |
220 return internal_hash (obj, 0); | 220 return internal_hash (obj, 0); |
221 } | 221 } |
222 | 222 |
281 } | 281 } |
282 | 282 |
283 /* This is not a great hash function, but it _is_ correct and fast. | 283 /* This is not a great hash function, but it _is_ correct and fast. |
284 Examining all entries is too expensive, and examining a random | 284 Examining all entries is too expensive, and examining a random |
285 subset does not yield a correct hash function. */ | 285 subset does not yield a correct hash function. */ |
286 static Hash_Code | 286 static Hashcode |
287 hash_table_hash (Lisp_Object hash_table, int depth) | 287 hash_table_hash (Lisp_Object hash_table, int depth) |
288 { | 288 { |
289 return XHASH_TABLE (hash_table)->count; | 289 return XHASH_TABLE (hash_table)->count; |
290 } | 290 } |
291 | 291 |
432 sizeof (hentry), | 432 sizeof (hentry), |
433 hentry_description_1 | 433 hentry_description_1 |
434 }; | 434 }; |
435 | 435 |
436 const struct lrecord_description hash_table_description[] = { | 436 const struct lrecord_description hash_table_description[] = { |
437 { XD_ELEMENT_COUNT, offsetof (Lisp_Hash_Table, size) }, | 437 { XD_ELEMCOUNT, offsetof (Lisp_Hash_Table, size) }, |
438 { XD_STRUCT_PTR, offsetof (Lisp_Hash_Table, hentries), XD_INDIRECT(0, 1), &hentry_description }, | 438 { XD_STRUCT_PTR, offsetof (Lisp_Hash_Table, hentries), XD_INDIRECT(0, 1), &hentry_description }, |
439 { XD_LO_LINK, offsetof (Lisp_Hash_Table, next_weak) }, | 439 { XD_LO_LINK, offsetof (Lisp_Hash_Table, next_weak) }, |
440 { XD_END } | 440 { XD_END } |
441 }; | 441 }; |
442 | 442 |
463 | 463 |
464 /* Creation of hash tables, without error-checking. */ | 464 /* Creation of hash tables, without error-checking. */ |
465 static void | 465 static void |
466 compute_hash_table_derived_values (Lisp_Hash_Table *ht) | 466 compute_hash_table_derived_values (Lisp_Hash_Table *ht) |
467 { | 467 { |
468 ht->rehash_count = (Element_Count) | 468 ht->rehash_count = (Elemcount) |
469 ((double) ht->size * ht->rehash_threshold); | 469 ((double) ht->size * ht->rehash_threshold); |
470 ht->golden_ratio = (Element_Count) | 470 ht->golden_ratio = (Elemcount) |
471 ((double) ht->size * (.6180339887 / (double) sizeof (Lisp_Object))); | 471 ((double) ht->size * (.6180339887 / (double) sizeof (Lisp_Object))); |
472 } | 472 } |
473 | 473 |
474 Lisp_Object | 474 Lisp_Object |
475 make_standard_lisp_hash_table (enum hash_table_test test, | 475 make_standard_lisp_hash_table (enum hash_table_test test, |
476 Element_Count size, | 476 Elemcount size, |
477 double rehash_size, | 477 double rehash_size, |
478 double rehash_threshold, | 478 double rehash_threshold, |
479 enum hash_table_weakness weakness) | 479 enum hash_table_weakness weakness) |
480 { | 480 { |
481 hash_table_hash_function_t hash_function = 0; | 481 hash_table_hash_function_t hash_function = 0; |
508 } | 508 } |
509 | 509 |
510 Lisp_Object | 510 Lisp_Object |
511 make_general_lisp_hash_table (hash_table_hash_function_t hash_function, | 511 make_general_lisp_hash_table (hash_table_hash_function_t hash_function, |
512 hash_table_test_function_t test_function, | 512 hash_table_test_function_t test_function, |
513 Element_Count size, | 513 Elemcount size, |
514 double rehash_size, | 514 double rehash_size, |
515 double rehash_threshold, | 515 double rehash_threshold, |
516 enum hash_table_weakness weakness) | 516 enum hash_table_weakness weakness) |
517 { | 517 { |
518 Lisp_Object hash_table; | 518 Lisp_Object hash_table; |
529 rehash_threshold > 0.0 ? rehash_threshold : | 529 rehash_threshold > 0.0 ? rehash_threshold : |
530 size > 4096 && !ht->test_function ? 0.7 : 0.6; | 530 size > 4096 && !ht->test_function ? 0.7 : 0.6; |
531 | 531 |
532 if (size < HASH_TABLE_MIN_SIZE) | 532 if (size < HASH_TABLE_MIN_SIZE) |
533 size = HASH_TABLE_MIN_SIZE; | 533 size = HASH_TABLE_MIN_SIZE; |
534 ht->size = hash_table_size ((Element_Count) (((double) size / ht->rehash_threshold) | 534 ht->size = hash_table_size ((Elemcount) (((double) size / ht->rehash_threshold) |
535 + 1.0)); | 535 + 1.0)); |
536 ht->count = 0; | 536 ht->count = 0; |
537 | 537 |
538 compute_hash_table_derived_values (ht); | 538 compute_hash_table_derived_values (ht); |
539 | 539 |
549 | 549 |
550 return hash_table; | 550 return hash_table; |
551 } | 551 } |
552 | 552 |
553 Lisp_Object | 553 Lisp_Object |
554 make_lisp_hash_table (Element_Count size, | 554 make_lisp_hash_table (Elemcount size, |
555 enum hash_table_weakness weakness, | 555 enum hash_table_weakness weakness, |
556 enum hash_table_test test) | 556 enum hash_table_test test) |
557 { | 557 { |
558 return make_standard_lisp_hash_table (test, size, -1.0, -1.0, weakness); | 558 return make_standard_lisp_hash_table (test, size, -1.0, -1.0, weakness); |
559 } | 559 } |
579 maybe_signal_error_1 (Qwrong_type_argument, list2 (Qnatnump, value), | 579 maybe_signal_error_1 (Qwrong_type_argument, list2 (Qnatnump, value), |
580 Qhash_table, errb); | 580 Qhash_table, errb); |
581 return 0; | 581 return 0; |
582 } | 582 } |
583 | 583 |
584 static Element_Count | 584 static Elemcount |
585 decode_hash_table_size (Lisp_Object obj) | 585 decode_hash_table_size (Lisp_Object obj) |
586 { | 586 { |
587 return NILP (obj) ? HASH_TABLE_DEFAULT_SIZE : XINT (obj); | 587 return NILP (obj) ? HASH_TABLE_DEFAULT_SIZE : XINT (obj); |
588 } | 588 } |
589 | 589 |
954 | 954 |
955 return hash_table; | 955 return hash_table; |
956 } | 956 } |
957 | 957 |
958 static void | 958 static void |
959 resize_hash_table (Lisp_Hash_Table *ht, Element_Count new_size) | 959 resize_hash_table (Lisp_Hash_Table *ht, Elemcount new_size) |
960 { | 960 { |
961 hentry *old_entries, *new_entries, *sentinel, *e; | 961 hentry *old_entries, *new_entries, *sentinel, *e; |
962 Element_Count old_size; | 962 Elemcount old_size; |
963 | 963 |
964 old_size = ht->size; | 964 old_size = ht->size; |
965 ht->size = new_size; | 965 ht->size = new_size; |
966 | 966 |
967 old_entries = ht->hentries; | 967 old_entries = ht->hentries; |
972 compute_hash_table_derived_values (ht); | 972 compute_hash_table_derived_values (ht); |
973 | 973 |
974 for (e = old_entries, sentinel = e + old_size; e < sentinel; e++) | 974 for (e = old_entries, sentinel = e + old_size; e < sentinel; e++) |
975 if (!HENTRY_CLEAR_P (e)) | 975 if (!HENTRY_CLEAR_P (e)) |
976 { | 976 { |
977 hentry *probe = new_entries + HASH_CODE (e->key, ht); | 977 hentry *probe = new_entries + HASHCODE (e->key, ht); |
978 LINEAR_PROBING_LOOP (probe, new_entries, new_size) | 978 LINEAR_PROBING_LOOP (probe, new_entries, new_size) |
979 ; | 979 ; |
980 *probe = *e; | 980 *probe = *e; |
981 } | 981 } |
982 | 982 |
983 free_hentries (old_entries, old_size); | 983 free_hentries (old_entries, old_size); |
984 } | 984 } |
985 | 985 |
986 /* After a hash table has been saved to disk and later restored by the | 986 /* After a hash table has been saved to disk and later restored by the |
987 portable dumper, it contains the same objects, but their addresses | 987 portable dumper, it contains the same objects, but their addresses |
988 and thus their HASH_CODEs have changed. */ | 988 and thus their HASHCODEs have changed. */ |
989 void | 989 void |
990 pdump_reorganize_hash_table (Lisp_Object hash_table) | 990 pdump_reorganize_hash_table (Lisp_Object hash_table) |
991 { | 991 { |
992 const Lisp_Hash_Table *ht = xhash_table (hash_table); | 992 const Lisp_Hash_Table *ht = xhash_table (hash_table); |
993 hentry *new_entries = xnew_array_and_zero (hentry, ht->size + 1); | 993 hentry *new_entries = xnew_array_and_zero (hentry, ht->size + 1); |
994 hentry *e, *sentinel; | 994 hentry *e, *sentinel; |
995 | 995 |
996 for (e = ht->hentries, sentinel = e + ht->size; e < sentinel; e++) | 996 for (e = ht->hentries, sentinel = e + ht->size; e < sentinel; e++) |
997 if (!HENTRY_CLEAR_P (e)) | 997 if (!HENTRY_CLEAR_P (e)) |
998 { | 998 { |
999 hentry *probe = new_entries + HASH_CODE (e->key, ht); | 999 hentry *probe = new_entries + HASHCODE (e->key, ht); |
1000 LINEAR_PROBING_LOOP (probe, new_entries, ht->size) | 1000 LINEAR_PROBING_LOOP (probe, new_entries, ht->size) |
1001 ; | 1001 ; |
1002 *probe = *e; | 1002 *probe = *e; |
1003 } | 1003 } |
1004 | 1004 |
1008 } | 1008 } |
1009 | 1009 |
1010 static void | 1010 static void |
1011 enlarge_hash_table (Lisp_Hash_Table *ht) | 1011 enlarge_hash_table (Lisp_Hash_Table *ht) |
1012 { | 1012 { |
1013 Element_Count new_size = | 1013 Elemcount new_size = |
1014 hash_table_size ((Element_Count) ((double) ht->size * ht->rehash_size)); | 1014 hash_table_size ((Elemcount) ((double) ht->size * ht->rehash_size)); |
1015 resize_hash_table (ht, new_size); | 1015 resize_hash_table (ht, new_size); |
1016 } | 1016 } |
1017 | 1017 |
1018 static hentry * | 1018 static hentry * |
1019 find_hentry (Lisp_Object key, const Lisp_Hash_Table *ht) | 1019 find_hentry (Lisp_Object key, const Lisp_Hash_Table *ht) |
1020 { | 1020 { |
1021 hash_table_test_function_t test_function = ht->test_function; | 1021 hash_table_test_function_t test_function = ht->test_function; |
1022 hentry *entries = ht->hentries; | 1022 hentry *entries = ht->hentries; |
1023 hentry *probe = entries + HASH_CODE (key, ht); | 1023 hentry *probe = entries + HASHCODE (key, ht); |
1024 | 1024 |
1025 LINEAR_PROBING_LOOP (probe, entries, ht->size) | 1025 LINEAR_PROBING_LOOP (probe, entries, ht->size) |
1026 if (KEYS_EQUAL_P (probe->key, key, test_function)) | 1026 if (KEYS_EQUAL_P (probe->key, key, test_function)) |
1027 break; | 1027 break; |
1028 | 1028 |
1065 Subsequent entries are removed and reinserted. | 1065 Subsequent entries are removed and reinserted. |
1066 We don't use tombstones - too wasteful. */ | 1066 We don't use tombstones - too wasteful. */ |
1067 static void | 1067 static void |
1068 remhash_1 (Lisp_Hash_Table *ht, hentry *entries, hentry *probe) | 1068 remhash_1 (Lisp_Hash_Table *ht, hentry *entries, hentry *probe) |
1069 { | 1069 { |
1070 Element_Count size = ht->size; | 1070 Elemcount size = ht->size; |
1071 CLEAR_HENTRY (probe); | 1071 CLEAR_HENTRY (probe); |
1072 probe++; | 1072 probe++; |
1073 ht->count--; | 1073 ht->count--; |
1074 | 1074 |
1075 LINEAR_PROBING_LOOP (probe, entries, size) | 1075 LINEAR_PROBING_LOOP (probe, entries, size) |
1076 { | 1076 { |
1077 Lisp_Object key = probe->key; | 1077 Lisp_Object key = probe->key; |
1078 hentry *probe2 = entries + HASH_CODE (key, ht); | 1078 hentry *probe2 = entries + HASHCODE (key, ht); |
1079 LINEAR_PROBING_LOOP (probe2, entries, size) | 1079 LINEAR_PROBING_LOOP (probe2, entries, size) |
1080 if (EQ (probe2->key, key)) | 1080 if (EQ (probe2->key, key)) |
1081 /* hentry at probe doesn't need to move. */ | 1081 /* hentry at probe doesn't need to move. */ |
1082 goto continue_outer_loop; | 1082 goto continue_outer_loop; |
1083 /* Move hentry from probe to new home at probe2. */ | 1083 /* Move hentry from probe to new home at probe2. */ |
1548 } | 1548 } |
1549 } | 1549 } |
1550 | 1550 |
1551 /* Return a hash value for an array of Lisp_Objects of size SIZE. */ | 1551 /* Return a hash value for an array of Lisp_Objects of size SIZE. */ |
1552 | 1552 |
1553 Hash_Code | 1553 Hashcode |
1554 internal_array_hash (Lisp_Object *arr, int size, int depth) | 1554 internal_array_hash (Lisp_Object *arr, int size, int depth) |
1555 { | 1555 { |
1556 int i; | 1556 int i; |
1557 Hash_Code hash = 0; | 1557 Hashcode hash = 0; |
1558 depth++; | 1558 depth++; |
1559 | 1559 |
1560 if (size <= 5) | 1560 if (size <= 5) |
1561 { | 1561 { |
1562 for (i = 0; i < size; i++) | 1562 for (i = 0; i < size; i++) |
1583 few elements you hash. Thus, we only go to a short depth (5) | 1583 few elements you hash. Thus, we only go to a short depth (5) |
1584 and only hash at most 5 elements out of a vector. Theoretically | 1584 and only hash at most 5 elements out of a vector. Theoretically |
1585 we could still take 5^5 time (a big big number) to compute a | 1585 we could still take 5^5 time (a big big number) to compute a |
1586 hash, but practically this won't ever happen. */ | 1586 hash, but practically this won't ever happen. */ |
1587 | 1587 |
1588 Hash_Code | 1588 Hashcode |
1589 internal_hash (Lisp_Object obj, int depth) | 1589 internal_hash (Lisp_Object obj, int depth) |
1590 { | 1590 { |
1591 if (depth > 5) | 1591 if (depth > 5) |
1592 return 0; | 1592 return 0; |
1593 if (CONSP (obj)) | 1593 if (CONSP (obj)) |
1627 The value is returned as (HIGH . LOW). | 1627 The value is returned as (HIGH . LOW). |
1628 */ | 1628 */ |
1629 (object)) | 1629 (object)) |
1630 { | 1630 { |
1631 /* This function is pretty 32bit-centric. */ | 1631 /* This function is pretty 32bit-centric. */ |
1632 Hash_Code hash = internal_hash (object, 0); | 1632 Hashcode hash = internal_hash (object, 0); |
1633 return Fcons (hash >> 16, hash & 0xffff); | 1633 return Fcons (hash >> 16, hash & 0xffff); |
1634 } | 1634 } |
1635 #endif | 1635 #endif |
1636 | 1636 |
1637 | 1637 |