
Born December 1949.

After starting a degree at Oberlin in 1967, dropped out without
completing 3rd year.  Torn between religion and physics as an
undergraduate.


Out to BC with Katy Tolles (her father Frederick Barnes Tolles was a
Philadelphia Quaker / historian) in the fall of 1969, visited Argenta,
a Quaker settlement in BC, back to Cambridge and Philadelphia
to see respective families.

Had to get out of the US (draft), so that winter took over the old job
of his brother Arnold in an NRC high-energy Physics lab, living with
Katy and Arnold in an old farmhouse in a posh neighbourhood in Ottawa.
Very snowy winter, record-breaking, 18 feet?, long driveway and a lot
of shovelling, piled up to the 2nd floor.  Involved with the Ottawa
Quaker Meeting, a youth group, and a Mennonite youth group.  Stayed
there for several years.  In March 1971, his employer began partnering
with the Univ. of Chicago Physics dept and LRL in Berkeley; he went
there, installed a PDP-9 / 15 in a 40-ft Fruehauf trailer, and moved
from Ottawa to Fermilab, where Brian's office was.  Programmed in
machine language (see below).  He could 'program like crazy' in the
air-conditioned trailer, high-volume music in headphones, but couldn't
write English.  Lived in a hotel in Hyde Park(?).  They owned an
Austin Mini bought for $100 in the summer of 1970, while working at a
Quaker peace conference on Rhinestone island in a lake near Ottawa.

Katy went out to Berkeley that spring, where the experiment was to
take place.  Married in June of 1971 at Pendle Hill / Swarthmore, then
back to Berkeley.  Lived in a backyard house at Telegraph and Shannon
(?).  Legally a Canadian resident, notionally in the US on a business
trip.  The experiment ran and wrapped up, and he went back to Ottawa.
He wanted to stay in the US; they ended up (autumn 1971?  1972?)
Cambridge, where WCS was by then head of the new Center for the Study
of World Religions at Harvard.

[Applied to Graduate School at MIT in EECS, started taking some
courses, but eventually MIT admin said he couldn't be admitted w/o a
UG degree.]

Interested in being a Social Inquiry major, in order to study the
politics of high technology; how we get from that goal to transferring
to EECS is not clear.

It was very quickly clear that the understanding of computing that the
social scientists were critiquing was not [Programming in machine
language] the computing that I know.  So I need to get clear on what
computing really is, so that I can legitimately critique it.  So I
thought I had to go into the heart of the beast, as it were.

Terry Winograd provided the friendship and both social and 'official'
support-structure to allow Brian to start to express himself out loud,
as it were.  

Saying to Fodor, ref. Tom Swift and his procedural grandmother, that
"this is not how compilation worked", Fodor was blustery but
open-minded enough to say "this is your subject area, I'm sure you're
right; tell me how it does work".  He and Fodor were friends, but
later Fodor "curdled".

Dog hanging on to a scented cloth -- sitting at the console of a 360
and keying in instructions and debugging by staring at the pattern of
lights that the console froze in.

Articulating an understanding of computing that would do justice to his
intuitive understanding of computing as he had experienced it is the
theme of all his intellectual work.

"Course on compilers, I had written a compiler, I'd written a tiny OS
for a PDP-9 running a physics experiment".  Pat Winston sat him down
and took him through the requirements for a CSEE degree, and decided
he'd satisfied them all.  But he needed a Bachelor's thesis, so they
took a paper from a course he'd taken in the autumn, called "Comments
on Comments", and added some stuff; it got marked and accepted as his
thesis, so he was awarded the degree and could actually be enrolled as a
student under the supervision of Peter Szolovits.

[CSLI not particularly relevant]

[CPSR?]

----------
MIT, 1974++ MSc thesis _Levels, Layers and Planes_, about
architectural properties of computer science
There are no particulars in physics [ref. deixis discussion, where is
it]
What drove me out of social inquiry and back to department 6 was
needing to be back in the practice.  That skill was not something that
people on the outside understood.

Lens on a conical base (watchmaker's), with oil and iron filings, that
allowed you to manifest the data on digital mag tape.  No disks on the
PDP-9.  That concrete engagement with the computer affected my sense
of digitality.

I wanted there to be types, not tokens.  Set theory has no constants
(e.g. pi, e, i), functions, derivatives, integrals are types in a
way.  Wanted a KR that didn't depend on token identity (no eq tests in
the interpreter).

LLP was an attempt to get the things, "kernel facts", of a KRL to be
types, not tokens (cf *car* and *cdr* vs. differentiation and
integration), the ontology of the computational.

[HST mentions integral signs and script deltas] Brian says
"syncategorematicity"

Promote the eq tests into type tests (in the interpreter).
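
To make the types-not-tokens point concrete, here is a minimal Python
sketch (my own illustrative analogy; the Fact class is invented for
this note and is not Smith's LLP or KRL machinery): two structurally
identical 'kernel facts' are distinct tokens, so an eq-style identity
test distinguishes them, while a type-level, structural test treats
them as the same fact.

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Fact:
      """A 'kernel fact', identified by what it is, not which object it is."""
      relation: str
      args: tuple

  a = Fact("orbits", ("earth", "sun"))
  b = Fact("orbits", ("earth", "sun"))

  print(a is b)   # False: token identity (the analogue of Lisp's eq)
  print(a == b)   # True:  type-level (structural) identity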

"You want to arrange the metaphysics so that _everything_ falls out"
-- G. Nunberg, speaking of BCS

My imagination was arrested by essentially foundational questions
about ... this stuff.  Not interested in applications, AI as such,
etc.

Still wanted to know what computing was; that remains true up to what's in
this book, CR.

Something else that makes me feel uncomfortable about CS from the
outset.  Conversation with MM: for you, MM, science is a form of worship,
whereas science is a form of theology for me (BCS), so I look to CS
not just to manifest the glory of God, but also to explain it.

Science should do justice to that.

Being shy around Peter and Butler, something else made me skittish,
something I needed in order to be at peace: a warmth / humility.  Why
I was at peace with [John] Haugeland.  [HST: JH wasn't a
programmer. BCS: Yes, but he programmed [in] PostScript.  BCS: We
disagreed about typography].

Had a sense with JH that even though he knew a lot more philosophy
than I did, that we were looking _together_ at relative
clauses/propositional claims, not that he was scrutinising
me. [ref. Andee Rubin]

In the book I claim that deferential semantics is the heart of
intentionality.  "There is more in heaven and on earth than is dreamt
of in your philosophy".  CS is fundamentally an intentional subject
matter, its intentional character has been hidden, and
its use of semantics has usurped it for mechanistic purposes.

All semantical vocabulary has been redefined in mechanistic terms:
"the semantics of X" == "what will happen if X is processed"

Thereby all humility and deference is lost.

[What about Phi vs. Psi, 'full [?] procedural consequence']

If you are interested in _real_ semantics, ... what's a poor boy to
do?

Semantical issues are nonetheless still in the driver's seat---we are
happy when (+ 2 3) yields 5 because of our awareness of them.
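
A toy illustration of the contrast (my own Python sketch, invented for
this note, not anything from the book): process captures the
mechanistic reading of "the semantics of X" -- what happens when X is
processed -- while designation captures what the expression is about;
we are satisfied with the processing only because it preserves
designation.

  def process(expr):
      # Mechanistic reading: what happens when the expression is processed.
      # Here, crudely, processing "(+ 2 3)" produces the numeral "5".
      return {"(+ 2 3)": "5"}[expr]

  def designation(expr):
      # Deferential reading: the number the expression is about.
      return {"(+ 2 3)": 5, "5": 5}[expr]

  expr = "(+ 2 3)"
  result = process(expr)
  print(result)                                  # '5'  -- what happens
  print(designation(expr), designation(result))  # 5 5  -- what they are about
  # We are happy with the result because processing preserved designation:
  assert designation(result) == designation(expr)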

Tracing the fate of those issues, and of that vocabulary, is a story
that needs to be told.

"Things have changed and now we do things differently."  What's
changed and how is it different?

Answer - we would want the SDK to track reference relations, not
just implementation relations.  But that's so complicated that it
couldn't possibly work.  Suppose you're defining a vector type
accessible via theta and rho or via x and y.  Setting x and rho
constrains the others.  The compiler can ignore this, and just keep
one representation or the other, but the type system should 'know'
the relationship between both, and could therefore track a lot more
about a program using vectors than it does at the moment.
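
A minimal sketch of that vector example in Python (my own
construction; the Vector class and its accessors are purely
illustrative, not from the book or any actual SDK): only the Cartesian
pair is stored, but the polar view is kept in step with it, which is
exactly the cross-representation relationship a richer type system
ought to 'know' about.

  import math

  class Vector:
      """A 2-D vector addressable via (x, y) or via (rho, theta)."""

      def __init__(self, x=0.0, y=0.0):
          # Only the Cartesian pair is stored: the compiler is free to
          # "just keep one or the other".
          self.x, self.y = x, y

      @property
      def rho(self):
          return math.hypot(self.x, self.y)

      @rho.setter
      def rho(self, value):
          # Setting rho constrains x and y (the direction is preserved).
          theta = self.theta
          self.x, self.y = value * math.cos(theta), value * math.sin(theta)

      @property
      def theta(self):
          return math.atan2(self.y, self.x)

      @theta.setter
      def theta(self, value):
          # Setting theta constrains x and y (the magnitude is preserved).
          rho = self.rho
          self.x, self.y = rho * math.cos(value), rho * math.sin(value)

  v = Vector(3.0, 4.0)
  print(v.rho)       # 5.0
  v.rho = 10.0       # changing the polar view changes the Cartesian view
  print(v.x, v.y)    # ~6.0 ~8.0 (up to floating-point rounding)

Here the relationship lives only in hand-written accessor code and is
invisible to the type system; the suggestion above is that the type
system itself could carry the relation x = rho * cos(theta),
y = rho * sin(theta), and so track much more about programs that use
vectors.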

[HST poses a story about astronomers and air traffic controllers?]

Problem solving is not the motivation, articulating what is the case
is, to say what's true.

The effect of PSI is everything that happens, and the PHI relations
are what matters.  All constraints, norms, requirements are expressed
in terms of PHI stuff.

What does this book say that requirements engineering etc. haven't
already said?

[HST what about program correctness, specification languages ? etc.]

[Chapter 7?]

The gap between computer science and programming practice is
well-known and embarrassing, but rarely foregrounded.

The vocabulary point is easy to state.

Barwise foundered on different understandings of binding a variable.

That the vocabulary issue is of huge importance needs "a clarion
statement".  This is foundational work, so I can't define my terms.

"I don't believe in definitions"

"Look, this kind of paper that I write should be read more like novel
than like a manual.  What things mean will gradually take shape"

Engender confidence that what you're about to read will make sense by
the end/in due course/by-and-by.

Vocabulary point is several points: 
 1) Points will be expressed using a vocabulary which is a term
     of art for someone/drawn from someone's technical vocabulary, perhaps not you
 2) Also, not necessarily the term of art you use for it; 
    Indeed it may be an ordinary word of English, so you may not
    realise that a term of art has gone by.
 3) There may not be terms in _any_ technical vocabulary that do what
    I need here

Taking on their meaning like a Polaroid did, filling in gradually.

Consider 'effective': the boundary (with non-effective) is run
roughshod over by, e.g.,

  "Call this state 'zero'" -- naming a concrete token with an abstract
  type.

[Argh, not really right]

When classifying these things with labels that respect/front their
ontological character

If trying to teach this stuff, it would be useful to know that we had
14 weeks, and on day 1 you can say we'll get to that in week 3.

A book on the philosophy of computation, not by a philosopher, but by
a practitioner who was driven to spending their life trying to
understand what they practiced.

Come hither, one and all 

That this is important needs to be said.  And it's not about _me_,
that is, it's not important because I say it is.   But that it's
important to you does mean that that claim deserves our attention.

A delicate dance -- why have I asked you [HST] to write this, not
someone else.  Because you were there from the beginning.

NB on p. 24 of CR 0.93:

  Inevitably, as noted in the Preface, it follows that all statements
  made here are vulnerable to being differentially interpreted by
  diverse audiences—even those to which the book is primarily
  addressed.

------------
Foundations of/Philosophy of Computation

Lisp was 'broken', 2-Lisp was a flawed attempt to fix it, 3-Lisp takes
us into new territory.

Don't think you have to be a specialist to read this book.

Effective vs non-Effective is actually new: at the book boundaries,
project onto the effective [? - it's not that everything is
term-rewriting, it's more like ].

-------------------

On first reading, before even finishing the introduction, I asked
Brian what "effective" meant, since it seemed very important, and
appeared to be being used in some technical sense, and it was not
immediately obvious to me how that related to my understanding(s) of
the word as used in ordinary language.


------------
*Foreword*

Brian Cantwell Smith was born in Montreal, Canada, on 1 December 1949.
Growing up first there and later in Cambridge, Massachusetts, he
remains a Canadian citizen.  Multiple allegiances, sometimes
conflicting but mostly complementary, have characterized both his
personal and intellectual life ever since.

He started undergraduate study at Oberlin College in Ohio in 1967,
where his interests included both physics and religion, but left after
only two years, travelling first to visit the Quaker community of
Argenta, British Columbia, and ending up in Ottawa, where he started
work as a programmer at the Division of Physics laboratory of the
National Research Council of Canada, on a project jointly involving
Fermilab in Chicago and the Lawrence Radiation Laboratory in Berkeley.
Working at all three sites on PDP-9 and PDP-15 minicomputers, he
"programmed like crazy" in machine language, building systems for
experimental control and data gathering.
  
When the project ended Brian moved back to the family home in
Cambridge, and started taking classes at the Massachusetts Institute
of Technology (MIT), studying what was then known as Social Inquiry,
in particular the politics of high technology.  But it quickly became
apparent that the understanding of computing that the social
scientists were critiquing was not the computing that he knew as a
programmer, what he later came to refer to as "computing in the wild".

"What drove me out of Social Inquiry and back to [Computer Science] was
needing to be back in the practice.  That skill was not something that
people on the outside understood."

Brian had realised that in order to legitimately critique Computer
Science, he needed to get clear on what computing really is: "I had to
go into the heart of the beast, as it were". So he applied for the PhD
program in Electrical Engineering and Computer Science at MIT and
began taking classes there.

When the MIT administration discovered Brian didn't have an
undergraduate degree, and so couldn't be registered for graduate
study, Patrick Winston, the newly-appointed head of the Artificial
Intelligence Laboratory, gave Brian an informal oral exam in topics
from the MIT undergraduate computer science curriculum and awarded him
the credits necessary for a degree, clearing the way for his admission
to the graduate program.

In 1976 Terry Winograd, who had left MIT to join the Computer Science
Lab at the Xerox Palo Alto Research Center (PARC), invited Brian to
spend the summer in the Understander Group there, where he joined in
the development of KRL, a Knowledge Representation Language, which
came to embody some of the ideas that were developed in his Masters
and PhD dissertations [refs].

These biographical details bring us to the brink of Brian's
professional life, and to the time and place where we first met. The
point made above about multiple allegiances can be succinctly
summarized by a list of the positions he has occupied since the
completion of his PhD a few years later:

 * Member of the Scientific Staff, Xerox PARC
 * Director, Xerox PARC System Sciences Lab
 * Adjunct Professor of Philosophy, Stanford University
 * Founding member of Stanford University's Center for the Study of
   Language and Information
 * Founding member and first president, Computer Professionals for
   Social Responsibility
 * President of the Society for Philosophy and Psychology
 * Professor of Cognitive Science, Computer Science, and Philosophy,
   Indiana University
 * Kimberly J. Jenkins University Distinguished Professor of
   Philosophy and New Technologies, Duke University
 * Dean of the Faculty of Information, University of Toronto
 * Invited keynote speaker, _Défaire l'Occident_, Plainartige, France
 * Professor of Information, Philosophy, Cognitive Science, and the
   History and Philosophy of Science and Technology, University of
   Toronto
 * Senior Fellow, Massey College, University of Toronto
 * Reid Hoffman Professor of Artificial Intelligence and the Human,
   University of Toronto

It was during Brian's years in Palo Alto at PARC, at first just for
the summer and then full-time, that the foundations were laid for the
work that led to this book.

  "As an exercise in using KRL representational structures, Brian
   Smith tried to describe the KRL data structures themselves in
   KRL-0. A brief sketch was completed, and in doing it we were made
   much more aware of the ways in which the language was inconsistent
   and irregular. This initial sketch was the basis for much of the
   development in KRL-1."  [ref. Bobrow and Winograd 1977, "Experience
   with KRL-0: One Cycle of a Knowledge Representation Language", in
   _Proceedings of the Fifth International Joint Conference on
   Artificial Intelligence_, Morgan Kaufmann Publishers, Burlington,
   MA.  Available online at
   https://www.ijcai.org/Proceedings/77-1/Papers/032.pdf].

Brian's input into the (never completed) KRL-1 meant that not only
could some parts of a system's data be _about_ other parts, but that
this would be more than just commentary. It would actually play a role
in the system's operation. For KRL-1, this was initially motivated by
a desire to formulate aspects of knowledge representation such as
negation and disjunction as, if you will, knowledge about knowledge,
rather than as primitives built into the vocabulary of the
representation language itself. [elaborate this with reference to
old-style Semantic Nets and Bobrow and Norman ?]
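
A toy sketch of that idea (mine, in Python, and emphatically not KRL):
one part of the data is a statement about another part, and the query
procedure consults it, so the meta-level data participates in the
system's operation rather than serving as mere commentary.

  facts = set()
  meta = set()    # statements about other statements

  def assert_fact(f):
      facts.add(f)

  def deny(f):
      # 'denied(f)' is data about the datum f, not a new primitive kind of fact.
      meta.add(("denied", f))

  def holds(f):
      # The meta-level data plays a role in the system's operation.
      return f in facts and ("denied", f) not in meta

  assert_fact(("flies", "opus"))
  deny(("flies", "opus"))
  print(holds(("flies", "opus")))   # False: a statement about a statement
                                    # has changed the system's behaviour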

Brian's development of this idea, which he termed 'reflection', is
documented in the papers gathered in _Legacy_.  But its title
notwithstanding, this book is _not_ a recapitulation of that work.

There was an assumption at the heart of Brian's reflective
architectures, which was initially expected to occupy just one section
of one chapter of his PhD, as signalled in its preliminary outline
Table of Contents.  But its resolution proved to be much more
problematic than expected, to the extent that it has taken
a lifetime of work for Brian to bring it clearly into focus.

Looking back it seems that this difficulty acted rather like the grit
in the oyster, stimulating Brian's wholesale reconsideration of the
nature of computation, and of Computer Science as currently practiced,
which _is_ what this book is about.

You'll have to read the book to find out what that assumption was, and
the details of the critique of Computer Science that it led Brian to.

It may seem rather presumptuous of me to suggest that this one person
has accurately diagnosed a problem that a whole field of enquiry has
missed, to the point where it has ended up altogether stuck, unable
to see what it is missing.  The list of Brian's achievements offered
above, and the manifest breadth of background it testifies to, will I
hope give sufficient grounds for suggesting that
it is at least possible that this indeed just might be worth checking
out.

As Brian himself said about this recently: "That this is important
needs to be said.  And it's not about _me_, that is, it's not
important because I say it is."  That it's important to him does
however mean that his claim deserves our attention.

This is not an easy book to read, but it's a very important book, so
it's worth the effort.  As Brian himself has said, it's written rather
like a detective story, in which the same underlying set of facts is
explored repeatedly, getting closer each time to a complete and
self-consistent picture.  When I first read it, I said to Brian more
than once "But you keep using [some term], and it's clear you mean
it in some important, technical, sense, but you haven't _defined_ it".
And he said, "Look, what I've written should be read more like a novel
than like a manual.  What things mean will gradually take shape.  Be
patient".

If you care about computer science, either as a practitioner, or a
theorist, or a concerned citizen, this book matters for you.  Its
conclusions matter, even if parts of it are not meant for you.  So
even if you find it hard, as a computer programmer, to see why you
should care if the theorists have got it wrong, be patient.  If you're
a theorist, and you find Brian's critique at best irrelevant, and at
worst aggressive, obnoxious and founded in misunderstanding, be patient.
If you're a citizen, and the technical details are off-putting, be
patient.

If you _are_ patient, and stay the course, when you get to the end you
will realise that you actually do understand the terminology now, and
that even though the work that remains is hugely challenging, and
perhaps only imperfectly grasped by Brian himself, much less the rest
of us, getting it done matters for all of us.  As practitioners and
theorists, we need to ask ourselves what we can do to make Brian's
vision a reality.  As citizens, we need to cheer from the sidelines,
and keep asking questions.  We owe him that much.

Henry S. Thompson, Toronto and Edinburgh, November 2024.

*Epigraph*

   Therefore, I close with the following dramatic but also perfectly
   serious claim: cognitive science and artificial intelligence cannot
   succeed in their own essential aims unless and until they can
   understand and/or implement genuine freedom and the capacity to
   love.

       John Haugeland, "Authentic Intentionality", 2002