*Foreword*

Brian Cantwell Smith was born in Montreal, Canada, on 1 December
1949. Though he grew up first there and later in Cambridge,
Massachusetts, he remains a Canadian citizen. Multiple allegiances,
sometimes conflicting but mostly complementary, have characterized
both his personal and intellectual life ever since.

He started undergraduate study at Oberlin College in Ohio in 1967,
where his interests included both physics and religion, but he left
after only two years, travelling first to visit the Quaker community
of Argenta, British Columbia, and ending up in Ottawa, where he
started work as a programmer at the Division of Physics laboratory
of the National Research Council of Canada, working on a project
jointly involving Fermilab near Chicago and the Lawrence Berkeley
Laboratory. Working at all three sites on PDP-9 and PDP-15
minicomputers, he "programmed like crazy" in machine language,
building systems for experimental control and data gathering.

When the project ended Brian moved back to the family home in
Cambridge, and started taking classes at the Massachusetts Institute
of Technology (MIT), studying what was then known as Social Inquiry,
in particular the politics of high technology. But it quickly became
apparent that the understanding of computing that the social
scientists were critiquing was not the computing that he knew as a
programmer, what he later came to refer to as "computing in the
wild".

"What drove me out of Social Inquiry and back to [Computer Science]
was needing to be back in the practice. That skill was not something
that people on the outside understood."

Brian had realised that in order to legitimately critique Computer
Science, he needed to get clear on what computing really is: "I had
to go into the heart of the beast, as it were". So he applied for
the PhD program in Electrical Engineering and Computer Science at
MIT and began taking classes there.

When the MIT administration discovered Brian didn't have an
undergraduate degree, and so couldn't be registered for graduate
study, Patrick Winston, the newly-appointed head of the Artificial
Intelligence Laboratory, gave Brian an informal oral exam in topics
from the MIT undergraduate computer science curriculum and awarded
him the credits necessary for a degree, clearing the way for his
admission to the graduate program.

In 1976 Terry Winograd, who had left MIT to join the Computer
Science Lab at the Xerox Palo Alto Research Center (PARC), invited
Brian to spend the summer in the Understander Group there, where he
joined in the development of KRL, a Knowledge Representation
Language, which came to embody some of the ideas that were developed
in his Masters and PhD dissertations [refs].

These biographical details bring us to the brink of Brian's
professional life, and to the time and place where we first met. The
point made above about multiple allegiances can be succinctly
summarized by a list of the positions he has occupied since the
completion of his PhD a few years later:

* Member of the Scientific Staff, Xerox PARC
* Director, Xerox PARC System Sciences Lab
* Adjunct Professor of Philosophy, Stanford University
* Founding member of Stanford University's Center for the Study of
  Language and Information
* Founding member and first president, Computer Professionals for
  Social Responsibility
* President of the Society for Philosophy and Psychology
* Professor of Cognitive Science, Computer Science, and Philosophy,
  Indiana University
* Kimberly J. Jenkins University Distinguished Professor of
  Philosophy and New Technologies, Duke University
* Dean of the Faculty of Information, University of Toronto
* Invited keynote speaker, _Défaire l'Occident_, Plainartige, France
* Professor of Information, Philosophy, Cognitive Science, and the
  History and Philosophy of Science and Technology, University of
  Toronto
* Senior Fellow, Massey College, University of Toronto
* Reid Hoffman Professor of Artificial Intelligence and the Human,
  University of Toronto

It was during Brian's years in Palo Alto at PARC, at first just for
the summer and then full-time, that the foundations were laid for
the work that led to this book.

"As an exercise in using KRL representational structures, Brian
Smith tried to describe the KRL data structures themselves in
KRL-0. A brief sketch was completed, and in doing it we were made
much more aware of the ways in which the language was inconsistent
and irregular. This initial sketch was the basis for much of the
development in KRL-1." [ref. Bobrow and Winograd 1978, "Experience
with KRL-0: One Cycle of a Knowledge Representation Language", in
_Proceedings of the Fifth International Joint Conference on
Artificial Intelligence_, Morgan Kaufmann Publishers, Burlington,
MA. Available online at
https://www.ijcai.org/Proceedings/77-1/Papers/032.pdf].

Brian's input into the (never completed) KRL-1 meant that not only
could some parts of a system's data be _about_ other parts, but that
this would be more than just commentary. It would actually play a
role in the system's operation. For KRL-1, this was initially
motivated by a desire to formulate aspects of knowledge
representation such as negation and disjunction as, if you will,
knowledge about knowledge, rather than as primitives built into the
vocabulary of the representation language itself. [elaborate this
with reference to old-style Semantic Nets and Bobrow and Norman ?]

Brian's development of this idea, which he termed 'reflection', is
documented in the papers gathered in _Legacy_. But its title
notwithstanding, this book is _not_ a recapitulation of that work.

There was an assumption at the heart of Brian's reflective
architectures, whose examination was initially expected to occupy
just one section of one chapter of his PhD dissertation, as
signalled in its preliminary outline Table of Contents. But its
resolution proved to be much more problematic than expected, to the
extent that it has taken a lifetime of work for Brian to bring it
clearly into focus.

Looking back, it seems that this difficulty acted rather like the
grit in the oyster, stimulating Brian's wholesale reconsideration of
the nature of computation, and of Computer Science as currently
practiced, which _is_ what this book is about.

You'll have to read the book to find out what that assumption was,
and the details of the critique of Computer Science that it led
Brian to.

It may seem rather presumptuous of me to suggest that this one
person has accurately diagnosed a problem that a whole field of
enquiry has missed, to the point where the field has ended up
altogether stuck, unable to see what it has missed. But the list of
Brian's positions and achievements offered above, and the manifest
breadth of background it testifies to, will I hope give sufficient
grounds for thinking it at least possible that his diagnosis is
worth checking out.

As Brian himself said about this recently: "That this is important
needs to be said. And it's not about _me_, that is, it's not
important because I say it is." That it's important to him does,
however, mean that his claim deserves our attention.

This is not an easy book to read, but it's a very important book, so
it's worth the effort. As Brian himself has said, it's written
rather like a detective story, in which the same underlying set of
facts is explored repeatedly, getting closer each time to a complete
and self-consistent picture. When I first read it, I said to Brian
more than once "But you keep using [some term], and it's clear you
mean it in some important, technical, sense, but you haven't
_defined_ it". And he said, "Look, what I've written should be read
more like a novel than like a manual. What things mean will
gradually take shape. Be patient".

If you care about computer science, whether as a practitioner, a
theorist, or a concerned citizen, this book matters for you. Its
conclusions matter, even if parts of it are not meant for you. So
even if you find it hard, as a computer programmer, to see why you
should care if the theorists have got it wrong, be patient. If
you're a theorist, and you find Brian's critique at best irrelevant,
and at worst aggressive, obnoxious and founded in misunderstanding,
be patient. If you're a citizen, and the technical details are
off-putting, be patient.

If you _are_ patient, and stay the course, when you get to the end
you will realise that you actually do understand the terminology
now, and that even though the work that remains is hugely
challenging, and perhaps only imperfectly grasped by Brian himself,
much less the rest of us, getting it done matters for all of us. As
practitioners and theorists, we need to ask ourselves what we can do
to make Brian's vision a reality. As citizens, we need to cheer from
the sidelines, and keep asking questions. We owe him that much.

Henry S. Thompson, Toronto and Edinburgh, November 2024.

*Epigraph*

Therefore, I close with the following dramatic but also perfectly
serious claim: cognitive science and artificial intelligence cannot
succeed in their own essential aims unless and until they can
understand and/or implement genuine freedom and the capacity to
love.

John Haugeland, "Authentic Intentionality", 2002