comparison BCS_HST_2024-06-19/transcribeme.txt @ 6:abb1b1e2f6fc

trying alternative sources of free speech-to-text
author Henry S Thompson <ht@inf.ed.ac.uk>
date Wed, 21 Aug 2024 19:34:07 +0100
parents
children 438dc80354b8
comparison
equal deleted inserted replaced
5:f3b043032519 6:abb1b1e2f6fc
1 (Transcribed by TurboScribe.ai. Go Unlimited to remove this message.)
2
3 [Speaker 1] (0:00 - 0:00)
4 Record.
5
6 [Speaker 2] (0:02 - 0:04)
7 It says recording here.
8
9 [Speaker 1] (0:04 - 0:16)
10 Yep, and it just, I clicked it as you spoke or just before or something like that. How are you doing? Well, I said this last time and you disagreed with me, but you look okay.
11
12 [Speaker 2] (0:16 - 0:35)
13 Yes, so I actually think I am okay this time. Good, good, good. I'm a little compromised in various ways, which I'm going to tell you about.
14
15 [Speaker 1] (0:36 - 0:37)
16 Sure, well.
17
18 [Speaker 2] (0:40 - 0:44)
19 One of them being that I haven't done my homework for a reason I want to try to explain, actually.
20
21 [Speaker 1] (0:46 - 1:50)
22 Well, I mean, it was short notice, but I figure we do this, well, I don't know, it's like going, this is a comparison I use too often in too many ways. It's like we used to do with the kids, which was that we would go to the West Coast of Scotland for the May bank holiday weekend every year. And without paying any attention to what the weather forecast was, because you needed to book in advance to get a cheap place and so on.
23
24 And sometimes that meant famously, and family history is a good thing, eating our sandwiches in a phone booth on the ferry pier between Skye and Raasay, because it was raining too hard. Didn't want to sit in the car to have our picnic. But sometimes it meant swimming off white sand beaches in Arisaig in 20 degree weather, and it looked and felt like the Caribbean.
25
26 So you win some and you lose some. And if this is not as well prepared as you'd like, then we'll talk anyway.
27
28 [Speaker 2] (1:51 - 2:57)
29 We'll talk anyway. And I have a question about substance. So here's the problem.
30
31 I have to get the final draft of the Reflections book to the press by July 8th, which deadline I'm not going to make. But I need to make it enough that my good standing with the press remains such that I can get an extension. And I think given the uncertainty about my lifespan, to say nothing of maybe just efficiency overall, I just need to do that.
32
33 So this morning, I kind of thought, look, am I going to spend the morning reading old versions of God Approximately, which I would like to do? And I slapped myself on the other wrist. Is that a well founded instructional?
34
35 [Speaker 1] (2:57 - 2:59)
36 Probably not. But anyway.
37
38 [Speaker 2] (3:01 - 3:08)
39 And have been working on that.
40
41 [Speaker 1] (3:08 - 3:13)
42 That's that. I mean, you you're the only person who can correctly set your priorities.
43
44 [Speaker 2] (3:14 - 3:19)
45 Right. So I think I have to do that. Now, July 8th is not very far away.
46
47 [Speaker 1] (3:20 - 3:20)
48 No, it's not.
49
50 [Speaker 2] (3:21 - 3:34)
51 So that might mean delaying our project by a rather short amount of time. But realism, the aforementioned realism means it'll probably mean deferring it for longer than that.
52
53 [Speaker 1] (3:34 - 3:42)
54 But understood. But we can we can reduce, at the very least, reduce the frequency. But I may try to keep it ticking over one way or another.
55
56 [Speaker 2] (3:42 - 3:46)
57 Yeah, sure, sure. Well, so here's a question, if I can just plunge in. Maybe there are other.
58
59 [Speaker 1] (3:46 - 3:46)
60 Of course.
61
62 [Speaker 2] (3:47 - 4:15)
63 Yeah, go. So I was struck when I wrote the postscript note to our last meeting. By how I was framing everything.
64
65 In terms of. Well, actually, I don't even remember the last note. Hang on a second.
66
67 Maybe I should take a look at it.
68
69 [Speaker 1] (4:16 - 4:16)
70 I should too.
71
72 [Speaker 2] (4:17 - 4:18)
73 Was it email? Probably.
74
75 [Speaker 1] (4:18 - 4:27)
76 I believe. Well, I'm sorry. If it was an email, then I don't have it.
77
78 But that doesn't mean that it's not worth looking at. All right.
79
80 [Speaker 2] (4:28 - 4:39)
81 So I'm desperately waiting.
82
83 [Speaker 1] (4:45 - 4:50)
84 I gather from Jim that some progress has been made on the Mac project.
85
86 [Speaker 2] (4:52 - 4:53)
87 On which project?
88
89 [Speaker 1] (4:54 - 4:57)
90 The Save Brian's Mac project.
91
92 [Speaker 2] (4:58 - 5:02)
93 Oh, yes. But not enough to have the Mac saved.
94
95 [Speaker 1] (5:05 - 5:11)
96 Well, he was hopeful of his next meeting with you, but maybe it didn't happen that way.
97
98 [Speaker 2] (5:15 - 5:16)
99 So when did I?
100
101 [Speaker 1] (5:16 - 5:23)
102 Okay, here we are. Call this week. No, that was quick thought.
103
104 It says here.
105
106 [Speaker 2] (5:23 - 5:24)
107 Oh, that's it. Okay.
108
109 [Speaker 1] (5:25 - 5:25)
110 Right.
111
112 [Speaker 2] (5:26 - 5:53)
113 All right. So yeah, I've got it. Right.
114
115 So as in the first paragraph, I say, call these the historical and metaphysical approaches.
116
117 [Speaker 1] (5:54 - 5:54)
118 Right.
119
120 [Speaker 2] (5:55 - 6:09)
121 And what I have not done is read any. So what you think you have or what you know that you have is something like version 11. Is that right?
122
123 [Speaker 1] (6:09 - 6:18)
124 That's correct. 2009 version 11, which I would say in terms of this dichotomy is entirely the historical approach.
125
126 [Speaker 2] (6:19 - 6:19)
127 Okay.
128
129 [Speaker 1] (6:19 - 6:37)
130 And I think that's consistent with the note at the top, which says, in previous versions of this, I tried to produce a metaphysics, which would underpin what I'm talking about, but didn't get far enough to make it worth reproducing or something like that.
131
132 [Speaker 2] (6:38 - 6:41)
133 And I did say in previous versions of this.
134
135 [Speaker 1] (6:41 - 8:23)
136 I believe so. Let me just get the fact of the matter in front of me, which it nearly is. In fact, wait a minute.
137
138 I'm just looking at the wrong place. This one. Yes, it is.
139
140 A number of manuscripts have been circulated under this title over the last 15 years. Right. This one lacks any sketch of a worldview exhibiting the characteristics described.
141
142 I presume that means described below as it were. Somewhat in response to the first version, which tried to provide such a view without explanation of what was interesting or mattered about it. If it seems worthwhile, I may someday incorporate all the various versions into a single long, it says short, monograph.
143
144 [Speaker 2] (8:27 - 8:28)
145 Stereograph.
146
147 [Speaker 1] (8:29 - 8:31)
148 Yes, something like that.
149
150 [Speaker 2] (8:34 - 10:57)
151 Right. Okay. Well, that's very helpful actually to me.
152
153 Bob, thank you for finding that. Yes, I think that longer monograph, the yet to be produced longer monograph is what I feel as if we're aiming at. And I don't actually know whether I made any attempt to say that these lead to the same view.
154
155 I have actually thought about that. Okay. So, let me actually recite from memory four or five sentences and tell me if they ring a bell.
156
157 Have you ever read them? Goes something like this. Start at the beginning.
158
159 That is, start at what those who'd like to start at the beginning start with. Bosons, fermions, quarks, assemblages pressed into atoms and molecules and DNA and so on and so forth. And then the second paragraph saying, of course, something like that's not a beginning.
160
161 Many will argue, whatever. And then something like, but actually it doesn't matter where we start. We'll end up in the same place.
162
163 So, in the media there would be something like other people would say start with stories or something like that. Anyway.
164
165 [Speaker 1] (10:58 - 12:12)
166 I see what you're saying. Okay. I mean, I think it's important that you, well, it changes where you go next to have something like the storyline, because otherwise it's all just about where you cut the physics.
167
168 And that I think is not enough. That's just what I think of as, I had a version of this conversation last week with my regular Quaker interlocutor. There are these two questions, which I believe, which I tend to attribute to Kant, but I may get wrong.
169
170 Why is there something rather than nothing? And how would I live my life? And if you talk to Dominicans, for instance, they will happily talk about one or the other, but usually find it challenging to see what the relationship is between likely answers to the first and likely answers to the second.
171
172 That's another way of saying what it is you're trying to bring together, I think. Right.
173
174 [Speaker 2] (12:12 - 13:08)
175 I think so. Yeah, I think so. And I think what I put in the note after the historical approach is sort of a story about how our understanding of fermions and bosons, as it were, has been pressed into service as a grounds for normativity and maybe objectivity and so on and so forth.
176
177 I don't think successfully, but there is...
178
179 [Speaker 1] (13:08 - 13:13)
180 That's really the first large paragraph in the email.
181
182 [Speaker 2] (13:14 - 13:19)
183 Right. Which I've now buried under lots of windows.
184
185 [Speaker 1] (13:20 - 13:32)
186 Well, the pure mechanism of classical science, then rationality with reference to Fregean logic, then normativity, and the current paradigm of deriving it from the evolutionary field, etc. Right.
187
188 [Speaker 2] (13:42 - 15:20)
189 Yeah. So then the argument would go something like this, that the only tenable version of the metaphysical approach, well, sorry, the only tenable version of both approaches ends up being indistinguishable from the tenable version of the other. And one crucial factor in that, I believe, is that both stories have to do justice to our being here.
190
191 [Speaker 1] (15:22 - 15:31)
192 Yeah. I mean, I've been thinking... You know the phrase, the thing, which I think is very bizarrely labeled, the anthropic principle.
193
194 [Speaker 2] (15:31 - 15:32)
195 Right.
196
197 [Speaker 1] (15:32 - 15:42)
198 Which attempts to dissolve the first of the Kantian questions by saying, because if there weren't something, we wouldn't be here to ask the question, get over it.
199
200 [Speaker 2] (15:45 - 16:03)
201 Yes, but I think that the anthropic principle is misapplied radically because they try to understand what the world needs to be like in order to support life or inquiry or something like that.
202
203 [Speaker 1] (16:05 - 16:31)
204 Yeah. I mean, yeah, certainly. Yeah.
205
206 What little I remember of the time I heard somebody talk about this at length was Planck's constant is what it is. And the fact that if you varied it by not very much in either direction, nothing would work isn't something that needs explanation because it evidently is the case.
207
208 [Speaker 2] (16:31 - 16:31)
209 Right.
210
211 [Speaker 1] (16:32 - 16:52)
212 And if it weren't the case, I mean, yes, exactly. It is at least of minor theoretical interest to establish what the bounding box is within which we would still be here to ask that question. But having done that, there's nothing more to be said.
213
214 [Speaker 2] (16:53 - 16:53)
215 Right.
216
217 [Speaker 1] (16:55 - 17:06)
218 But I think you're... So, I mean, I don't think that changes the availability of both projects, essentially.
219
220 [Speaker 2] (17:06 - 17:53)
221 I think that's right. And I actually think, you know, this is... Well, I'm going to have to agree to the long rather than short.
222
223 I'm assuming if I go down this pathway, but I actually think the fact... Well, as I put it, which is transparent to nobody, the ontological warrant for the epistemic fact that we use differential equations to express physical laws is actually... I mean, I don't know if I said this in the objects book, but anyway, underlies the deixis of the world, which I think is fundamental to consciousness and self and various things like that.
224
225 [Speaker 1] (17:56 - 18:08)
226 But because of the uncertain... No, not the uncertainty because, I mean, is this... What I remember from the objects book, which I've already apologized for is very little, is about the importance of slop.
227
228 [Speaker 2] (18:09 - 18:11)
229 Yeah, no, that's a different thing.
230
231 [Speaker 1] (18:11 - 18:14)
232 That is a different thing. Okay. Nevermind then.
233
234 Press on.
235
236 [Speaker 2] (18:20 - 18:25)
237 What's the... Press on regardless?
238
239 [Speaker 1] (18:26 - 18:26)
240 Yeah.
241
242 [Speaker 2] (18:27 - 19:54)
243 I'm not sure I should accept the regardless just now, but yeah, the deixis stuff is, I think, important to self. And something else that's interesting, this is going to sound a little bit like a non-sequitur, but I think it's not for obvious reasons. The fact that LLMs are based on language is, I think, possibly consequential, but possibly not the reason for their success.
244
245 Because I think the power of them stems from the fact that the relationality that they encode is so stupefyingly huge that all the content of the state of the network is bizarrely non-conceptual in the sense of that.
246
247 [Speaker 1] (19:58 - 20:16)
248 Absolutely. I mean, they got somewhere by not being representational. Well, not being representational.
249
250 Sorry, but not being explicitly representational. That no amount of additional funding to Doug Lenat and company would ever have gotten to.
251
252 [Speaker 2] (20:17 - 20:25)
253 Right, right. Exactly. How to say that well is not trivial, but I completely agree.
254
255 [Speaker 1] (20:26 - 20:44)
256 Yeah, I mean, it would be useful in the indefinitely unforeseeable future to have a conversation involving Fernando Pereira about this, because... Have you ever met Fernando? Not clearly.
257
258 [Speaker 2] (20:44 - 20:51)
259 Oh, yeah. I knew him. God knows if he was a student, but anyway, 100 years ago.
260
261 [Speaker 1] (20:51 - 22:51)
262 No, he was our student, because I did his PhD oral. Oh, I see. No, but I think he was in California at the time of the oral, so it's possible.
263
264 It doesn't matter. Anyway, he was here six months ago for a guest talk during our 60th anniversary celebrations. And the talk was interesting, but not great and not recorded.
265
266 But lunch beforehand, which was just me and him and one other person, was hugely more valuable, because he was expanding to an audience that could hear of the two of us on his anger about the fact, about the impact of his own company's work, indirectly in terms of OpenAI, but in ChatGPT and so on. Because he's recently changed within Google, being responsible for the natural language work to being responsible for the sort of theory practice interface within Google. And he's very angry about the way in which people are treating the natural language problem as having now been solved and or being soluble only by the technologies of LLMs. But what he did for us in that conversation, and I wish I had recorded it, was give me a much clearer sense of the scale of the base model. And also the scale of the priming that it gets in order to make it a question answerer.
267
268 [Speaker 2] (22:51 - 22:52)
269 Yeah. What's that called?
270
271 [Speaker 1] (22:53 - 22:56)
272 The prompt. It's not the prompt, but it's something.
273
274 [Speaker 2] (22:56 - 22:57)
275 Prompt engineering.
276
277 [Speaker 1] (22:58 - 23:05)
278 Yeah. The prompt engineering is, there are three aspects of this, I think. There is the base model.
279
280 [Speaker 2] (23:06 - 23:13)
281 Right. Which is something like 100 billion gigabytes or something.
282
283 [Speaker 1] (23:13 - 23:46)
284 Yeah. Well, it's certainly that many dimensions. And I don't know, there's this whole business about projecting to lower dimensionalities for years that I don't understand.
285
286 But there's the base model. There is the make this a question answerer, make a question answerer from this base model. And there's, what do we add to the conjunction of those two from your question?
287
288 [Speaker 2] (23:48 - 23:54)
289 And is the third of those what's called prompt engineering? I think so.
290
291 [Speaker 1] (23:55 - 23:59)
292 But I could be wrong. It doesn't matter.
293
294 [Speaker 2] (24:00 - 24:01)
295 Anyway. Yeah. Anyway.
296
297 [Speaker 1] (24:04 - 24:26)
298 Even though the interesting part in a way is in a sense from the performance point of view is not the base model, but it's the thing you make a question answerer out of it with.
299
300 [Speaker 2] (24:27 - 24:28)
301 Right. Right.
302
303 [Speaker 1] (24:29 - 25:19)
304 Because that's what the people who don't have any money scrimp on, skimp on. Right. And why you then get things which lie and fabulate and contradict themselves and in general, or start imitating Witty Tiki Ray rather than a human being or whatever it might be.
305
306 Because actually, there's another kind of farm rather than the GPU farm that you need to build something like as successful as it is as ChatGPT, which is a huge farm of ordinary human beings asking questions and feeding back to the engineers the wrong answers and saying, you've got to stop this kind of answer.
307
308 [Speaker 2] (25:19 - 25:23)
309 Right. Yeah. That's a lot of trivial.
310
311 [Speaker 1] (25:24 - 25:29)
312 And that's an open-ended and in principle, impossible task.
313
314 [Speaker 2] (25:29 - 25:31)
315 Right. Interesting.
316
317 [Speaker 1] (25:32 - 25:34)
318 Anyway, that was all.
319
320 [Speaker 2] (25:34 - 25:45)
321 A total footnote. You could have expressed your thought at the beginning of your what you just said that that's what people who scrimp skimp on.
322
323 [Speaker 1] (25:46 - 26:13)
324 Yes. Something like that. Anyway.
325
326 But so I think from your perspective, it's really GPT-3 that you're interested in, which is the base model. It's now GPT-4 and they won't tell you anything about that. The only thing we have any information about is GPT-3.
327
328 Right. Well, that's the only thing I've seen published information about from Google anyway. Right.
329
330 [Speaker 2] (26:16 - 26:17)
331 Yes. I mean, I think that's...
332
333 [Speaker 1] (26:18 - 26:19)
334 OpenAI. Sorry. Yeah.
335
336 [Speaker 2] (26:19 - 26:26)
337 From OpenAI. Yeah. I think that's what I was just talking about.
338
339 I mean, it doesn't prove that I'm not interested in the other ones.
340
341 [Speaker 1] (26:28 - 26:59)
342 But I mean, it's there, for instance, that we come back to the thing that you said, which I think is why I think deixis is certainly in there is not only do they not know that there's a world that not only does that 100 million gigabytes, whatever it is, 100 million gigabytes, what it doesn't have is any obligation to the world about which...
343
344 [Speaker 2] (27:00 - 27:01)
345 Right.
346
347 [Speaker 1] (27:01 - 28:33)
348 That is some kind of representation. Right. Yeah.
349
350 But that responsibility can be decomposed in any particular instance to being only about a certain small part of the world, which amounts, I guess, in many cases, to some kind of story about reference and deixis. And it does... I am tempted to bring Jonathan back into this again, Jonathan Rees, because of his...
351
352 What he's been spending the last two or three years on is trying to articulate a story about reference, which is simply defined in terms of propositions that include this are vulnerable to changes in that. That is, they include this referring expression are vulnerable to changes in that bit of the world as a way of talking about what does that referring expression refer to? Well...
353
354 Because he's a radical empiricist, basically, he wants...
355
356 [Speaker 2] (28:33 - 28:33)
357 Right.
358
359 [Speaker 1] (28:33 - 28:35)
360 Anyway, sorry, that is taking us away now.
361
362 [Speaker 2] (28:35 - 29:59)
363 No, not entirely, because there was a title of a talk I was thinking of putting together, something like the nonverbal meaning of words. If we talk about, not only about Sussman, but let's say, and what he meant by empirical or something, but just we talk about... Well, the things we're talking about, the three parts, the base model, the delta that turns it into a question answering machine, and the prompt engineering that turns a particular prompt into a particular prompt, basically, particular question into a particular
364
365 (This file is longer than 30 minutes. Go Unlimited at TurboScribe.ai to transcribe files up to 10 hours long.)