AI episode 1


00:00-01:32:36


Transcription

The speaker has been involved in education and has a background in technology. They became interested in ChatGPT, a language model, because of its ability to generate natural language responses. They were impressed by its capabilities, such as writing a poem in the style of Sylvia Plath and handling complex queries. The speaker started exploring prompt engineering and its potential impact on education. They believe that ChatGPT has the potential to revolutionize various fields, including business and education. However, they also discuss the challenges of incorporating it into existing systems and the initial reluctance of some people to use it. They compare the reaction to ChatGPT with previous technological advancements and highlight the potential for both positive and negative consequences. The speaker also mentions the need for user-friendly interfaces to encourage more people to use ChatGPT.

It's on. It's on. So, you have mostly been involved in education all your life. This is a big topic driven by a big change in technology, which is not an area you have depth in and doesn't immediately appeal to you, and not something you'd been following or deeply familiar with until things like ChatGPT came along. What drew you into the topic, and what made you so interested that you've decided the rest of your life is going to be uncovering this mystery?

Well, I've always been interested in technology and how it's used. So, I worked for like eight, nine years at Sheffield University in education and quickly transitioned into building education technology and trying to see how technology can be used to make educational experiences better. So then when ChatGPT came along and it became general use, I was immediately intrigued, saw the news stories, tried it really early and was absolutely blown away by what this thing could do in literally no time.
I think one of the first things I did, other than the normal just-ask-it-a-question stuff, like what's the meaning of life... but when I was truly astounded was when I asked it to write a poem in the style of Sylvia Plath, with some context for the person I wanted included as the subject of the poem. I mean, it was slightly clichéd, but you wouldn't immediately pick that poem out from 20 written by Sylvia Plath as the one that wasn't written by her. And I thought to myself, how did it manage to do that? And more importantly, how did it manage to understand the very conversational natural language that I'd used to make the request? So that immediately piqued my interest.

And I suppose that was nearly a year ago now. And I quickly got a premium account on ChatGPT. And when GPT-4 came along, I was even more impressed. It took a little bit longer to spit out the responses, but we're talking seconds here. And it can handle complex, multi-question, layered queries really impressively.

I suppose what then really interested me was, okay, what's the difference in quality of output compared to input? And as a teacher, you know, the quality of your input to the students changes the output. And I started reading articles about prompt engineering, you know, that became a thing. And there were suddenly jobs going for hundreds of thousands of dollars for people who can get these models to do what companies want them to do. So I started thinking about A/B testing with that. And so I started building a prompt library and looking into some prompt techniques and trying a couple of these out. And what excites me, I suppose, is that we're so early on in this journey. And even though there's quite a lot of literature already out there about how to use different techniques, and even the effects of these techniques and what they're best suited for, it's still completely uncharted territory. So, you know, there's that saying that any sufficiently advanced technology is indistinguishable from magic.
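The A/B testing of prompts mentioned above can be sketched as a tiny harness. This is a minimal, illustrative sketch: `generate` is a stub standing in for a real model call, and `score` is a placeholder quality metric (in practice it might be a rubric, human rating, or automated check), so the structure runs anywhere without an API key.

```python
def generate(prompt: str) -> str:
    # Stub for a real LLM call; a real harness would call a model API here.
    return f"response to: {prompt}"

def score(response: str) -> float:
    # Placeholder metric (length-based); swap in a real quality measure.
    return len(response) / 100

def ab_test(prompt_a: str, prompt_b: str, trials: int = 10) -> str:
    """Run both prompt variants `trials` times and return the better scorer."""
    total_a = sum(score(generate(prompt_a)) for _ in range(trials))
    total_b = sum(score(generate(prompt_b)) for _ in range(trials))
    return "A" if total_a >= total_b else "B"

winner = ab_test(
    "Write a poem in the style of Sylvia Plath.",
    "You are a poet. Write a short poem in the voice of Sylvia Plath, "
    "using the context below as the subject.",
)
print(winner)
```

The point of the structure, not the stub, is what matters: a prompt library plus a repeatable comparison loop is the systematic version of the trial-and-error the speaker describes.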
And the surprising thing is, in a lot of ways, that's not really been the lived impression through big changes. So, you know, the biggest change, which happened when you and I were growing up, was the introduction of the smartphone. And even to a lot of people in technology who saw it, it wasn't quite clear how truly revolutionary it was, putting so many things on such a small device, and what it would eventually unlock. The difference with ChatGPT is that even semi-interested users, never mind people who've actually used it in any depth, come away feeling like it's only two steps from magic. So, why do you think there's an almost emotional, visceral reaction to something like ChatGPT? Whereas, you know, you and me lived through the smartphone era. We got our first smartphones. I got an HTC Desire. I thought it was really cool, but I didn't really connect the dots on how much it would change the world we live in in such profound ways.

Yeah, like switching up from the Nokia, the brick Nokia, or like the Motorola PEBL that I had, to a smartphone. You know, like, when we got these things, you wouldn't have thought that this thing was going to enable something like an Uber, or that online shopping would double in share, or free global communication on video.

But look at the difference in emotional reaction to ChatGPT. So, among people who are familiar, you know, the debate is: is this going to deeply change the world, or is this going to radically change the world, or might this destroy the world? It's a...

Yeah, it's a gradient from significance to...

Well, we never had that reaction to the smartphone, and that's just in our lifetime. And the last time I remember anything like this reaction was the early internet, but that was three, four years after it really seeped into popular knowledge.

Yeah, so it's interesting you ask that question.
So, okay, so with smartphones and with the internet, which I think is another good example, the changes were incremental. They were quick, but Uber didn't come along one year after the smartphone was invented. And the app stores started getting populated with apps, and social media started developing. But it was relatively slow. And when I say relative, I mean relative to ChatGPT. ChatGPT lands, and suddenly, if you kind of know how to ask it questions, not in a deep, prompt-engineering sense, but like you understand how to phrase questions in a logical, concise manner, you suddenly see that you can go from a task that takes you days to a task that takes you, in some cases, minutes. Like, I remember reading about this professor of business, I can't remember the university, but he got himself and, like, four students to design a business plan and, like, create the business and take it through to completion. And they did it in a day. And he was like, this would have taken the five of us two weeks. You know?

Do you think this will finally hit business school fee inflation? Or is that immune to technology? Are there some things even ChatGPT can't change?

I mean, I think this is a really interesting question. Because for me, it's like, will it lead to education fee deflation? I think it depends on whether universities can harness it or not. Because, you know, one of the big questions for me is, will it devalue education? Will universities get to grips quickly enough with how to incorporate it in the way they assess and the way they teach? Or are students just going to be able to pass through university much more smoothly than they did before?

So, you know, if I return to the emotional reaction question, because, you know, I take your point, but let me run, you know, just a proposition by you. So, the amazing thing with all new technology is the immediate popular use is always frivolity.

Yeah, yeah, you always use it for fun.
I mean, one of the first things I did was get it to write me a poem. So, you know, what really changed a year after the Internet? Pornography.

Yeah, for sure.

And, you know, with smartphones, what was the first app which gained popularity on the App Store? It was a fart app. And, you know, I feel like part of the reason popular understanding of technology always underestimates the effect tends to be that the popular use is always frivolity, which is always frowned on. With ChatGPT, that happened too, but there's a completely different emotional reaction to it.

Like, actually, I mean, how frivolous can you be with text output? You know what I mean? Like, it's kind of one step removed from, you know, like, porn in text. Sure, people love an erotic novel. I'm sure the erotic novel writers have been churning them out even quicker than they could splat them on the page before. But, you know, I suppose it's becoming multimodal now. So the interesting thing, I never thought about this frivolity question, but now that it's multimodal and it can output audio, you can input, like, documentation. You've got the DALL-E 3 image generator now. All of this is going to come together quite quickly into perhaps revolutionizing porn again. Maybe porn 3.0.

You know, like, I'll give you an example. So the first use of ChatGPT I ever saw in my life was the engineers in my office using it to write cards to their significant others, which were so much better than anything they'd ever done in their lives. But even in that act, it was the first time most people came around and said, wow, this is going to change the world. I mean, these engineers are smart. They're using it for the thing they're least skilled at, right? You know, romantic sentiments. So, you know, that's kind of the immediate popular reaction. I think there are two questions about the reaction, which is, one, almost anyone with exposure to it has an immediate reaction, which is visceral and revolutionary.
But actually, we were, you know, talking about this earlier. You know, most people, 60, 70% of people, still haven't really even used it once.

No, you shared the survey with me. It was 84% of US adults haven't used it. And that was published a couple of months ago.

Yeah, so why do you think that is? Well, like, everyone usually knows at least one 16-year-old in their life, yeah, who will be interested in technology, or one 30-year-old in their life.

I wonder what you think about this, but my immediate reaction to this is that it's not intuitive. Like, what do I mean? Okay, so you go onto the ChatGPT website. You've got a box to put your question in and a bigger box where you can read the answer. So it's not the most exciting interface, right? So firstly, you need to, like, you need to say, what do I want to ask? And I felt this when I first went on there to use it. I was like, I have no idea what to ask this thing. I froze. And perhaps, I don't know whether it's just me, but perhaps people have gone onto the website out of curiosity and maybe they've just stopped there, and they're like, oh, I don't know what to ask it, you know?

You know, I tend to disagree with you to some degree. So my mother asked me what I thought. And I said, well, you know, this is basically going to determine everything about who wins and loses. So aren't you glad you're older? And she was offended, but she smiled.

Yes, just go into that in a bit more detail.

And, you know, so she was asking me how to use it. She said, do you have an account? Do you have all of this? So I just put it up on the web browser. This was when you still didn't need a premium account to ask it and wait, like, an hour to get an answer. And to your point, the first question was, okay, how do I use it? What's the output? So I told her, you just ask it anything. And, you know, she saw it. She was kind of blown away, but she didn't... It's not like she really went back to it herself.
Now, it's interesting because my mom's 67. Now, my uncle is 77. And he now uses it to write emails. After seeing it once after my...

Is he still working?

No. Oh, he's not worked in, yeah, 12 years.

Right.

So he just uses it, but he hates writing emails. You know, if you're older, it's not as fun typing.

Right.

But he got it. So I don't know what it is. I mean, do people not like... Is it just that expectation that you can't believe something does something in that intelligent manner, therefore the input is too large? But I mean, those two examples are just highly contradictory.

Well, in a sense, your uncle must have understood that, okay, this is a text generator, and therefore what is only ever text? It's an email. So maybe he was just like, oh, my God, I really hate writing these, so let's see how it does. I mean, you know, it's pretty smart to hit upon that thing. But I don't think people see it as a problem solver. They see it as a magic box, right? And that's overwhelming. And I feel overwhelmed sometimes when I look at it. Well, not now, but at the beginning I did, because the possibilities are endless. Anything you can put into words, you can ask it.

So, okay, so now you've got custom GPTs within ChatGPT. So it's a feature. If you're a premium account holder, you can click on a button to create a custom GPT, and essentially you just write a bunch of text telling it what you want it to do, you save that, and then it works on the basis of the input that you've told it. So I saw this video of this guy that created, like, a golf caddy, a GPT golf caddy for this golf course that he uses all the time, and put in a bunch of data. You can upload documents, so he uploaded, like, the bird's-eye view of the golf course and explained what he wanted to get out. So then literally you just say, hello, and the GPT will say, hey, are you playing golf? What hole are you on?
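What the custom GPT setup described above amounts to is a named bundle of instructions plus uploaded reference files. The sketch below is purely illustrative: the field names, the course file, and the toy `greet` function are assumptions, not OpenAI's actual schema or behavior.

```python
# Hypothetical configuration for the golf-caddy example, illustrative only.
golf_caddy = {
    "name": "Golf Caddy",
    "instructions": (
        "You are a caddy for one specific course. When the user says hello, "
        "ask which hole they are on, then advise using the uploaded map."
    ),
    "files": ["course_birds_eye_view.png"],  # illustrative upload
}

def greet(gpt: dict, user_message: str) -> str:
    # Toy stand-in for the model: the saved instructions shape every reply.
    if user_message.lower().startswith("hello"):
        return "Hey, are you playing golf? What hole are you on?"
    return f"({gpt['name']}) Ask me about the course."

print(greet(golf_caddy, "hello"))
```

The design point: once the instructions and documents are saved, the user never writes a prompt; they just say hello, which is exactly the lowered barrier the speakers say mass adoption needs.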
So I think now we're coming to the point where the overwhelming nature of GPT is pretty much left to those who are super interested in using it, like the coders, the engineers, and the people like me and others who just think it's fascinating. And, you know, OpenAI are now like, okay, go wild. We're going to create essentially an app store within this. That's the plan. That was the latest talk that Sam Altman gave, that they're thinking of creating, like, an app store for custom GPTs, and if they're super popular, then there might be, like, a small fee to purchase them. That's when it's going to pick up, I think. So it's only a few months away, which in AI terms is years, but, you know, it's coming really soon, and I think that's when people are going to use it. I think we talked about this a couple of days ago, or I don't know who I was talking about it with, but it's like, once AI becomes integrated into life, once people don't have to think that they are using it, that's when everyone's going to be using it. So it'll happen by default, I think.

What's the most surprising user you've seen, or use case and user you've seen? Just a normal person who, you know, somehow this is part of their life. Not a person you'd think of, not obvious, a bit like my 77-year-old uncle who pretty much figured out, he's like, all right, anything which cuts my need to write a full email is a big plus.

Most surprising user? None. I haven't been surprised by anyone I've met who uses it, and that's partially because I haven't met that many people that use it. Like, yeah, when I was in Chiang Mai for a couple of months, amongst all of the kind of, like, nomad entrepreneurs, obviously everyone used it, but that's not surprising. That is the demographic that is clearly going to use it. Beyond that, I haven't met anyone, like, over, like, 60 that uses it. And I haven't met anyone who, yeah, who shocks me that they do.
I think it more shocks me the number of people who, when I've said, oh, what do you think about ChatGPT? They're like, chat what? You know, that surprised me. That surprised me. That's the shock. People who are digital natives, or, you would say, can use technology, know that there are two mouse buttons and what they both do, and, you know, basically not like my mum. But, actually, that's unfair to my mum. Sorry, mum. You do know what the right mouse button does.

I think the sad truth is, Adam, knowing how to use two mouse buttons means you're out of touch.

I'm not sure. Well, I suppose for Mac, yeah, it just means you don't use a Mac, right? Anyway.

Yeah. Now you know something about me. You just have to use a desktop.

Exactly. In terms of the most surprising use case, not directly. I mean, I haven't met anyone who's told me something where I thought, that is super inventive, or different. I suppose it's just articles I read, where use cases, you know...

Or like, what's the most surprising way it's been integrated into any daily activity, where you've kind of been caught out of left field?

There's nothing that I'm aware of where... Have you heard of anything where you're shocked at how it's used?

Well, I told you, I mean, my number one example has to be my uncle.

But like, in terms of just general use, not so much the person, but like, have you heard of a way it's being used where you're like, wow, I never would have thought of using it that way?

Oh, image training.

Oh, but image training was like the beginnings of AI in the 90s, right?

Yeah, yeah, you know, like recognize the cat.

So it turns out, it's this really extensive human activity to look at an object and tell you what its attributes are and put it into a database.

Yeah.
And so the fact that, all of a sudden, you start to see companies come out with products where, you know, something which would take thousands of people and have millions of errors, you can now just train a model over a couple of months and it's spitting out results. So we had an example today with one of those products, and we've been training this for about two months. And it was funny, we were looking at two pictures of dresses, and the description which came out... It was this black knee-length dress, and we put "mini" and the product put "knee-length", and we decided to merge the attributes in case there was any dispute until the model was fully trained. And everyone was going along with this process until, you know, I looked at it and someone else also looked at it, and we both kind of looked at it at the same time, and I was like, guys, but it's a knee-length dress. That's a midi, not a mini. You know, so it's kind of where you expect the curve to go.

You mean that the AI was being intuitive about what the product actually was? It could have either said mid-length or it could have said mini.

Apparently the human didn't know where your knees are.

Yeah, but what I'm saying is that the AI had two choices. Like, it could have chosen from two options, you mean.

I don't know exactly what was put into it, you know.

Sure.

But I'd say it's more consumer-friendly.

Well, depending, I mean, most women would know the distinction.

But, I mean, the fact is the human was wrong. The AI was right. And that was the example on the screen, taken from a thousand examples, which is why it was kind of hilarious.

Yeah, I think on this topic of, like, human accuracy and AI accuracy, like, we've come to the point now where medical diagnoses by, I think, ChatGPT, maybe a slightly custom-trained version, but they're as good as a really competent doctor, you know. So that means they're better, like, it diagnoses better than, like, 75% of doctors, because not all doctors are good, you know, or there's a curve.
So then, like, the question is, like, does AI have this massive, like, mass-extinction-event effect on jobs? Like, it's being talked about so much. We've listened to podcasts, like 80,000 Hours episodes on it, like, really smart people talking about it and what they think the effect is going to be. Like, what's your take on that?

So I think, you know, it's funny. So I think on this thing, well, you have one debate, which I think, let's circle back to this economics debate because I think it's a pretty big one. But, you know, I think it comes down to this question about use and adaptation. And it's interesting because never have you seen something reach, like, 100 million users that fast. But relative inertia seems to be, like, a real thing. So, like, where you're seeing adoption in large cases, at least from what I'm seeing, you know, in the real world, is a lot of processes which just need, you know, a few kicks of productivity in them, where it's clear what you use them for. But people aren't ready to change how anything works just because something works faster. I mean, to your point, you can use it for a lot of things. You have custom LLMs. But it seems to have become, I mean, actually it started to be treated a bit like magic, you know, like something for a lot of people to talk about, in storybooks or at parties. But I don't really see much change on the curve in the past six months, actually.

Yeah.

Which is, for me, the strange thing, with a lot of people asking the obvious question: okay, this is important. Therefore, what do I need to know, and how can we practically use it? So, I mean, my favorite example of this is, you know, Accenture on their earnings call will go, every company needs an AI strategy. And then, you know, McKinsey will do a survey, a private survey of senior executives, and ask, do you think your company has one? And they'll say, we're just making shit up as we go along.
I don't even know what it is.

Yeah, it's interesting you say that, because I saw an article, well, I didn't read the article, just the headline, from the CEO of Coursera. And Coursera, for anyone who doesn't know, is like one of the biggest global providers of online courses. Some of them are accredited by universities. I actually had a job at university that was working with Coursera to develop courses. And, like, he was saying, yeah, AI is going to transform the way people learn. But I keep reading articles like this where people in positions of authority and responsibility are espousing, you know, the joys of AI, but the detail is lacking. So how do you think you get over that hump?

Because it's not an actually easy thing to get over. I mean, you know, it's easy to say it, but, like, in practical terms, I tell you, it's a lot more difficult, even if you really buy that it is really something which is going to change a lot. I mean, I'll give you my practical example. The single biggest area of attention AI has been given has been things like CRM, so customer relationship management, and, you know, basically trying to upgrade your chatbots.

Yeah.

And actually, you know, I question that approach, because a lot of people have taken it, but the reality is that that area already had a lot of automation and intelligence within it, so actually the rate of improvement through which something like AI is really going to change things there is lower than people think.

You already had a pretty decent baseline before it came along, right?

I'll give you a tangible example and keep it less vague. So you've gone from things like 100% manual to, with, like, conversational chatbots, the best I've seen before things like, you know, AI, is 80% of queries automated. Okay, and then it gets triaged to a person once. So when you're looking at something like an LLM, what you're basically saying is, can 80 go to 90? So the relative scope is actually harder.
But that doesn't seem beyond the realm of possibility.

No, no, it does.

So you're halving your workforce, no?

No... well, if 80% of your queries are automated, yeah, and you go to 90, you're halving the remaining 20%. Yeah, but the relative size of the problem is relatively small, whereas, say, for example, things like data entry are still far more manual than people appreciate. Like, you know, the example I gave you, tagging an image, is a pretty big industry. It employs lots and lots of people. It's not actually that automated. And the relative number of people you employ in that will be higher. Like, just data entry, master data entry, would actually be a very large portion of your entry-level white-collar workforce. So there's relatively much larger money over there. Those are tasks which couldn't have been automated, which all of a sudden can be automated. And they get looked at later, actually. So, say, for example, even things like, with technology, QA, you know.

Quality assurance.

Yeah, so, you know, things like productivity-related tasks related to basic data entry, any basic kind of thing which checks and ensures, those are still heavily manual. That's way more money than things like CRM. But I think the reason why CRM gets a much bigger focus, if I'm truly cynical about it, is that it's something involving AI which you can put in front of a customer in an obvious, obvious way. And therefore, it's obvious you're doing something about AI. Whereas, you know, I don't think that's so true of these productivity-related tasks. And, I mean, if I was being truly, truly cynical, if you looked at a lot of the blockchain stuff, it was basically meant to really, really improve all those quality assurance and data handover pieces. But it didn't really radically bring that much.

So, just for context for me, how was it touted as being able to improve that? Like, what was the vision there?

Oh, it was just that, you know, you'd have smart contracts.
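The arithmetic behind the 80-to-90 exchange above is worth making explicit: moving automation from 80% to 90% of queries halves the remaining manual workload, even though the headline change looks small. The query volume below is an illustrative number, not from the conversation.

```python
def manual_queries(total: int, automation_rate: float) -> int:
    """Queries still handled by people at a given automation rate."""
    return round(total * (1 - automation_rate))

total = 1_000_000  # illustrative monthly query volume
before = manual_queries(total, 0.80)  # 200,000 handled by people
after = manual_queries(total, 0.90)   # 100,000 handled by people
print(before, after)  # the manual load is cut in half
```

This is why both readings in the exchange are true at once: relative to the remaining manual work, it's a halving; relative to the whole query volume, it's only ten points, which is the speaker's point about relative scope.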
You don't need a ton of bureaucracy and people to check and fill in that stuff.

Right. But relatively, the size of the problem it was attacking was fairly small. You know, so I'm still waiting here, 11 years in. People, you know, are talking about, complaining about, it took you a while before you get Uber. I'm 11 years into blockchain, and I've yet to figure out a single, you know, high-value, world-changing use case. Let's not even go with world-changing. Let's go with middle-office-manager-changing. And I've yet to see it. But, yeah, I think that's honestly one of the struggles. You're trying to find something big which you can put in front of people, but actually it hits a lot of very mundane things in very fundamental ways. And that's hard to get your head around.

And I think the other part of it, which is even tougher, is that even if you half know it, and even for people who are excited, it is really hard to understand the outputs. So, like, in that example I just gave you there, why the hell did it pick that then? I don't friggin' know.

And the interesting thing is that when you ask, as I have, I've asked people that really do know about coding, and they know enough about AI, they've read all the papers, the academic papers, and the answer is, well, the people that have designed it just don't understand how it works beyond some mechanics and the inputs that they've given to it. But the exact way that it gives the outputs is really not fully known.

I think it's becoming more understood, because actually you can intuit the way it works from the way it outputs according to what you input.

I think that's a hopeful but misleading judgment.

Oh, yeah, go on.

No, because you don't really know.

Right.

Like, empirically, you can't write the equation that... Okay, all right, let me ask a very flippant question.
Can you intuit what your AI model, your large language model, throws out at you any better than you can figure out why your girlfriend is mad at you? You can intuit, but you might be horribly, horribly wrong.

You might be, but I suppose there are those who are better at understanding; depending on the relationship you have with someone, you can understand better. I suppose it comes down to, like, right, so you just anthropomorphized, in a way, tangentially, LLMs. And actually, that's what people have compared them to. You remember the New York Times journalist? I apologize for not remembering his name, but he basically, like, the...

The one from...

Yeah, yeah, yeah, yeah. And then they put more guardrails on it so that it wouldn't do that so much. But this idea of, like, you're interacting with something that feels human-like... OpenAI and others have tried to make it slightly less uncanny valley, because I think that quite often puts people off. But I think this comes back to your original question, why do people not interact with it more? It seems really quite human. And so, I don't know.

But do you think it's... You know, we talked about the inputs being so wide as the challenge. But actually, even when you have custom tools, you know, a fundamental human desire is to control output.

Yeah.

And if there's too large a distance between that... I mean, either you have a more static model where you become a very good prompt engineer, and that can kind of work, but...

Do you mean like a more refined model that's focused on certain types of output, so that it's actually boxed in slightly more?

Ideally, these things would be learning all the time.

Sure, sure.

Actually, how easy is it... how easy is it to adopt a technology where most people, even if they're in charge of something, can't control the output? It's really, you know, it's a bit like...

It's a hard question.

It goes against a fundamental human impulse of management and control.
Like, there's a way to use this.

Yeah. It feels as risky as, like, hiring Dominic Cummings to be your chief strategist, because you think he's great. But, you know, there's a lot of risk involved. The unpredictability.

And, you know, I think it comes down to a cognitive bias here. Exactly that point. You know, more than risk, can you ever truly take on something where you don't feel like you're in control of it? And there are parts of our lives where we let that... I mean, social media is a great example where, you know, you don't really control what's being thrown at you, and it's clear other people are in charge of it. But it's a very integral part of our lives. And we seem to be okay with it when it is frivolous. But would we be okay with it if that wasn't the case, on a fundamental thing which you were in charge of?

I think you've hit upon a really important point. So, from my understanding of what you said, and thinking about it: when I put queries in... So, I had to produce a sales and marketing plan for this app when I was doing something in Cambodia. And I didn't know anything about sales and marketing, but I was like, well, I've got ChatGPT, so I'm going to use that. So, you know, I managed to put together a pretty comprehensive plan to cover a 12-month period, based on the information I had about the product, the expected return on investment that the owner wanted, and then my own sort of intelligence. And the output seemed great. What I realized is, in order for me to fully understand whether it was great, two things. Firstly, I would have to input specific enough requests and context to be sure that what came out was good. And secondly, I would have to know enough about whether the sales and marketing plan made any sense to be able to QA it at the end. So, you can put queries in. You can get things out that look really robust and great and concise and intelligent.
If you don't know, then it might just be, you know, fluff to a certain extent.

Let me ask you kind of a different question. Let me refine my question, because it's kind of a psychological question about adoption. I mean, your basic proposition is that, actually, the way to handle it is to do your best with what you put in, and sort of keep putting in until you manage the output. But you have no control over the process in the middle. And actually, what I think would really make people a bit worried is, it's not human to just accept the output. Yeah, the inputs can be what they are, but you want to know what the lines of control are when something has gone wrong, and how to fix it. Or are we just in a world where, actually, the most fundamental technological change in our lives is, you know, a deep psychological change where we just keep poking in the dark until we figure out what a result looks like when it's kind of right? And you can do that on an individual level, but how do you do that when you're, like, Walmart, and you have millions of customers in a process, and, you know, ten things go wrong out of a million, but they're still ten things? It's such a complex process to figure that out. Do you think it just kind of goes against a basic organizing principle of how we use tools and technology?

Yeah, for sure. I think you're right. Not being able to peer into that box and understand exactly how things are being arrived at... Yeah, I can imagine that's really quite scary. And we're talking about kind of the economic side of things here, like adoption within companies.

Well, even any large-scale organization. You know, like, okay, let's take a fun one. The military, yeah? So, let's say they have AI-powered death robots.

Yeah. Or drones.

Drones. Drones? Yeah, yeah, yeah.

No, we didn't get MechWarrior 2000.

No. And even that was kind of low-key. Still a man in the machine.

Right. Would have been kind of boring to say, and he'll play for you.
But, you know, so you have a bunch of, like, AI-powered, you know, killer drones, which are programmed with the rules of war. Now, say if something like that accidentally kills, like, a hundred people, who it shouldn't, forever, and you're not really in control of the process, you know, how do you deal with that in any bureaucracy or any system of management? I struggle. Yeah. Like, at least if someone made the decision, you can kind of say, it was a bad day, you know, bad judgment, you know, put him in jail or, you know, blame it on his drinking. But what are you meant to do with the LLM if you don't know that problem is a persistent error? Or how do I correct it? Do you just keep changing the rules, which might kill more people through bad judgments, and say, all right, what's the ratio of, you know, bad deaths through automated systems? I mean, the same discussion... Well, I suppose you have an automated system on, like, an ICBM or a guided missile, right? It's kind of the same thing. You have an element of, you have a level of, sorry, you have, like, a failure level, right? So you can say of a hundred, a hundred guided missiles that are launched, or maybe of a thousand that are launched, three will go off target. The difference is the guided missiles don't, they might miss their targets, but, no, they're guided missiles for a reason. But someone's always choosing the target. So you're talking about something that people can maybe ascribe reasoning to. Because if you've got, like, AI-powered drones... So this was exactly like the autonomous car debate, yeah? Like, we can accept that people kill people with cars. Yeah. But what's our level of tolerance for cars killing people without people? Yeah. You know, it's a bit like that, and... And it's kind of a zero tolerance, isn't it? Because we look at this in regulation, as, like, self-driving technology, and how stringent it is. Yeah, and, you know, I...
You know, the Walmart example might be strange, but, to be honest, it's not so different from any large organization. I mean, that's the whole idea behind processes, that when something goes wrong, it's a deviation from the process. And there's a background level of tolerance for its mistakes. For newer technology, there's never a background level of tolerance. It gets lower and lower each time. I think that's my question. When all you can do is peer into the output and say, maybe it gets it right, how easy is it to adopt when you actually don't really know what it's doing? Especially as it gets larger. Like, there's a difference between, like, someone trying to get a business plan or an essay right, and someone trying to get something which involves millions of users correct. And if it's something kind of low-level, like not answering a customer query correctly, you can deal with that. But what are the things you can't deal with, when it feeds into so many things? And a lot of its uses, if it's going to be things like data entry and stuff like that... Like automated decisions on loans, or prison sentences, or college admissions, or mortgages, or, you know? You know, I didn't know this because I don't apply to jobs. But I have friends who do, and they tell me about the scary world of seeking employment. And it's a completely fair expectation now in the United States that as soon as you submit your CV, it's going through an automated... Triage process, right? Yeah, it goes through some automated CV checker. Sure. And you might even not get through the first person, not even get through the first gate, which is not even a person, it's just a machine. Sure. And, you know, the machine doesn't exactly... You know, it's not like it's incumbent on anyone to tell you why you were rejected, but, like, simple question, how do you know you're not being a little bit racist here? Sure.
Or, you know, what if the machine's just like, oh, you know what? I agree with the school fee system. Only people from these schools. Now it's not like people are free of those problems. They're actually probably as likely to be fully embedded in those problems. Or perhaps even more in some cases. But... But we accept that as a human condition. Or maybe we don't accept it, but we're not going to stop using humans to make these decisions on the basis that there's bias. Yeah. So, I think... I think, you know, that social media one comes down to actually this question. Because I think it's not true in all cases. Like, we've been wildly accepting of a steadily rising teen suicide rate. Or largely unaware of it. In, you know, I think a lot of people's cases. Are people really unaware? I don't know. I've listened to a couple of podcasts about it. So I don't know, but... I mean, yeah, you know, I feel like everyone knows that, you know, things like Instagram are driving people nuts. Yeah, I think this idea has been around... Well, I know that this idea has been around for at least, like, 15 years. Because one of the essay questions the students that I taught, like, over a decade ago was, you know, debate the psychological effects of social media use. You know, honestly, I don't know a single parent who's not had a conversation with their significant other over what their kids are allowed to use and by when. Sure. So, I feel it's in popular consciousness. Yeah, yeah, sure. Yeah, yeah, agreed. You know, well, the Chinese government definitely has a point of view. Yeah. Like, how many hours you can play video games. Absolutely. But we're kind of okay with that over there and those consequences. Because we feel like people should be in control of their own behavior. No matter how, like, blatantly addictive the business models are. But how true is this over here on the more boring stuff, which, you know, yeah, AI can do really, really well.
And I think that's where, I mean, if you take that point of view, actually, you take a view that, you know, the thing that would change most deeply in us is wildly frivolous. And actually, it wouldn't really have such a deep impact on a lot of parts of the world because, you know, those things that we struggle with, but, you know, you'd expect someone to know whether their AI girlfriend was healthy for them or not. I think you're putting too much, too much faith in people's ability to judge whether their romantic partners are healthy for them or not. I don't know. No, no, no. I'm saying from a regulatory point of view. Okay. Again, maybe China, a different point of view. Yeah, sure. But I think going back to your broader point, like this adoption and putting it into complex systems with consequences, with life or death consequences, or with consequences that can be damaging economically for large companies. Yeah, I think that's interesting. It seems to me from what I've read that companies, well, particularly OpenAI, and Sam Altman has this vision of just ramping up the ability of AI to the point where we get AGI as quickly as he and others around him can do it. That's not to say that he's not fully aware, and probably more fully aware than most people, of the potential disastrous consequences of doing this without really thinking about the guardrails that we put in place. He was pushing for the leading players coming together and saying we need to set standards for AI safety. And you can be cynical and say that he was trying to preempt that coming from Congress. But I do honestly think, from what I've heard from other podcasts and articles I've read, that he is aware and he is worried about the potential for AI to just get completely out of control. So this is where I'm coming at it slightly differently. I think the problem with podcasts about AI is it tends to be people who are deeply interested. To which, this one. Yeah, and the start of this conversation.
So we talked about how you got into it, but even when we were talking before, one of the motivations for why we're recording this is you're like, okay, if you don't know all the technology, how do you start getting this into your life? Yeah. Because that's the central problem. And I think one of the challenges about the narrative about this technology is it's almost self-evident that it's revolutionary. But actually one of the central problems is for a lot of people it's really, really not. So actually for me the question isn't really that. It's that actually how does this end up changing the way most of us would live, assuming it's non-apocalyptic, in a way that people can handle and integrate that change. And smartphones were brilliant in that. If you look at it, if you were a telephone company five years after the smartphone, try and make money off an SMS. God help you. Try and make money off a phone call. Talk to scammers that pretend to be HMRC, the UK tax office. You kind of can. There's always a customer. But this happened with telcos. They just got a substantial erosion more and more. I mean, the data packages went up and that kind of compensated for it. But imagine a phone company selling you only voice and SMS. Yeah. I mean, that would be one bankrupt phone company quite fast. Sure. And so things like the smartphone just integrated quite well into humanity in a way which I don't think people really appreciate how sudden it was. But in some ways I think the narrative around AI is that it is. And my practical experience is that actually it's not. It's substantially harder for people to figure out self-evident use cases for themselves and for their companies and their organizations in a way that isn't fluffy and goes on a news article sponsored by Accenture. Yeah. I think, I mean, that for me is the interesting problem. If it ends up murdering all of us then it's a moot point. We change whether we like it or not at the point of a gun. Yeah. Yeah.
I think to dig into this question of how it's going to change the world, and I'll go back to why AI excites me so much, because I saw, you said it's self-evident that this thing would change the world. It was self-evident to me. I immediately got excited and thought, my goodness, like, everything's going to change. But then over time I'm like, but how is it going to change? And I agree with you, like, the last six months have seemed to be a little bit more static. You have the initial excitement and things are still going on. There are still developments being made, and we've got custom GPTs and that's going to... but essentially it seems to have reached a little bit of a lull. And for me I'm excited about how it can make things better. But what does that mean? Like, how can it lower healthcare costs? How can it democratize education further? How can it allow different forms of creativity in the arts? You know, like, I'm not so much interested in whether it can, like, do data entry super quickly, because I don't run a company that has people that do data entry. Let me ask you something about education, because, you know, education is perennially about to have the revolution. Yeah. Never has the sector been... It's always just on the horizon. And so a long, long time ago I used to work for a consultancy business for a summer internship. And the niche they ended up finding was education. And so one of the big topics at the time was these MOOCs. Yeah. Massive Open Online Courses. Yeah. I mean, you've even had companies like Coursera, Khan Academy, FutureLearn. Yeah. It hasn't really stopped the cost of education just keeping on expanding and expanding. It hasn't. But then I think there are very identifiable reasons why that hasn't happened. So if I could ask you this in a simple way, because ideally something like education, this should be at the forefront.
So, yeah, I think with education you assume it's kind of at the forefront of all this change, but the level of institutional barriers to changing it, and frankly the level of kind of social and cultural barriers to changing an expectation around the prestige of a three- or four-year full university degree, has kind of killed any impact you see in education and technology. It seems to be oddly immune to everything that changes, and my obvious question is, those forces are so hard to overcome, what makes you think even something like, you know, ChatGPT and AI can change a sector you know best, which has managed to hold on to its supremacy ever since the expansion of higher education in the 50s? That was the last time education really changed in a substantial way. Well, I suppose, like, firstly, I just want to acknowledge your point. I think you're right. Although I would say that with things like Codecademy and FutureLearn and Coursera, there are people, and we know some, who have gone down that route and they've trained themselves and they've managed to either get a job... Oh no, no, don't get me wrong. I learned accounting from Coursera and I... Did you pay for the certificate at the end? I needed to learn the skills for work. So you didn't pay for the certificate. I didn't. I had a boss who would murder me if I got it wrong, so I didn't need a certificate, and I needed to not be murdered. Nice.
So I just took the course. It was the entire Wharton first-year undergrad general accountancy. Super boring professor, super good class if you don't know your shit. But yeah, so I'm a total big fan of this, I'm a believer. What amazes me is, despite all my beliefs and my own personal journey on them, they just had zero effect on the core 90% of expenditure in that sector. You know, that's surprising, because I think these, like, democratizing courses... that essentially are free. I mean, these platforms now are moving towards a locked-down model where you do actually have to get a subscription at least to even access the courses, but it's really not too expensive. I think this is the issue. It's like, okay, you have an idea of being employed by a company and you're saying, okay, shall I take these 7 Coursera courses or shall I apply to university? And there's still the perception that unless you have a degree, whether it be bachelor's or master's or PhD depending on the job you're going for, you're not even going to get through the machine stage of the triage. Yeah. So, you know, like, how well do you know the US education sector?
I would say quite well, but go ahead, probably a lot better than me. So, you know, as far as I know, the US has community colleges. That's the largest form of, like, public... Absolutely, I know people that went to them. It has a 60% drop-off rate. Sure it is... I mean, the fact that you could write an entire 7-season sitcom based on a joke of the community college tells you a lot about that form of education. And a good sitcom too. Yeah, I know, one of my favorites. But it has helped so many people. It has helped elevate so many people out of lack of education to better things. Think about that, that's frankly a failing system. Why wouldn't something like, you know, online courses? Because actually they're much cheaper to drop out of, quite frankly, so they're more efficient on the downsides of the system. But even those, you ask most people who sign up to those community colleges... So I think it's a problem of equivalence. Like, okay, so if you got together with, like, employers or, you know, groups of employers and you were like, okay, so let's take your BA in business studies from Harvard Business School and let's put together a curated list of courses that cost you 1% or less of what that cost you, and probably half the amount of time as well, and that's, you know, probably even more than that, but let's say half the amount of time. And let's say, you know, let's actually publish that on our JDs. Like, you know, you put the job advert out and you say either this or this collection of courses. Like, it's going to require something like that. Nobody would expect you to be gone... Do you know who Aswath Damodaran is? No. Teaches the finance course in Columbia. Okay. Probably one of the best known finance professors in the United States. Okay. I took his online courses. Okay. Fantastic. Highly recommend. Most people I know who have taken them, everyone has come out ranting and raving. No one cares that you took his course.
Everyone who has taken it has taken it on the basis of personal enrichment. You can take it as a class. You can take it for free. I know tons of people who frankly are already trained and have taken this class for free. But here's the thing, that's a level of coordination. No one says, all right, you know, Aswath's class, fantastic. Just take it, pass the CFA and we don't care. CFA means what? Oh, it's that finance certification. Certified Finance... It's like a CPA. Okay, yeah, sure. An accountant. Yeah, so it's a certified finance accreditation. So, you know, no one will say, okay, well, you could pass the most demanding test in the country to prove you have the skills, and all you did was take this guy's class; they'd still ask, so where's your degree in finance from? You know, it doesn't matter if you took it from the same professor there. And this dude is relatively famous, yeah? So, within that world, very famous. Yeah, like, superbly famous. And it's not like he's some random dude who's famous, he's a full-time professor at Columbia. Sure. But I think this is kind of my question on AI, which is, okay, when you look at education, what are the parts... how do you not end up, you know, actually just using it for a bunch of productivity use cases in a university, but actually fundamentally changing an education experience, which changes the business model through which actually people get these skills.
Because actually, if you look at it, I mean, ideally every country wants a more educated, more capable workforce at the cheapest cost, theoretically. Theoretically, yeah. Some administrations don't understand that should be a priority. Well, you spent a lot of time in universities. Yeah, I did, yeah. So I think, at least that's what the sign on the door says. Although my university is very excited their admissions rate has halved. Um... One of the few, yes. You know... Which ideally should be a bad sign, not a good sign, but you know. When you went to such a prestigious university, the admissions bar is... It's high. It was selective when I went there. But you wouldn't have got in if you applied now. I think that's very true. I feel bad for these kids. But okay, I mean, going back to your point. How do you make sure AI doesn't end up just running a more selective admissions process rather than fundamentally changing our ability to access and democratize education and produce a higher skilled population in a way which would have never been done before? The internet tried, it failed. So for me it's about learning. It's about personalization of education. So the thing about great teachers, and we've both been fortunate enough to be educated in environments where we have had really good teachers, both at a high school level and at a university level, and the thing about those teachers is they relate to you. Not just you, but also the other 20 people or 100 people in the lecture theatre, and they make it relatable, they make it understandable, they draw analogies, they pique interest, and they explain things in a way that doesn't assume knowledge and allows you to not feel overwhelmed or fearful of what you're learning. You know, the best educators are those that put things, complicated things, in simple terms so you understand it. I think AI can do that. AI can become a personalized tutor. We're talking about the higher ed landscape in the most developed countries in the world. Like, I'm not getting my violin out for
that cohort of students. Like, those first world problems can wait, I think. What I see, the immediate impact of AI, is allowing education for those who almost don't have it. You know, obviously you need to be literate in order to use AI. So, that's a different problem. But it's something I want to think about, like, how do you address illiteracy. But, you know, you have to be able to digest information. But I think that AI can have great impacts on those who are undereducated, not those who are kind of educated and are hoping to become more educated. Yeah, I... You know, it's a hard claim, actually, though, because, you know, if you look at the entry process... I mean, that same survey which we had looked at, it was interesting, because who are the most likely users? They were high income users. Yeah, and in the age bracket it's 30 to 45, right? I mean, actually, when you look at it, that's a funny thing. Age has less of an effect than income. Sure. Which is kind of odd. Is it, though? Why? Why do you think that's odd? Because it's kind of a free technology right now. But I think that underlying assumption, like, ignores the fact that in order to interact with ChatGPT you have to understand how to frame questions. Okay, so this is where I'm getting... actually, don't you think it's a technology which by definition requires higher levels of skill? Completely agree. So, I don't know, I think it would kind of actually work against that personalization argument. I mean...
Sorry. Yes, in its current form, I agree. But I think the hard work of allowing AI to be used by those who don't have those skills can be done by those who are really super interested in modifying them. So, actually, you don't have to say that OpenAI and Claude, and I shouldn't have mentioned two because now I should mention all of them and I don't know all of them, but, you know, the swathes of AI and Copilot and all that kind of stuff. Yes, it is exclusive, and I think that, I haven't thought about this, so this is a really interesting point. It is an exclusive technology, and I think that explains why so many people haven't used it. And, like, I don't think the data on, like, 84% of people haven't used it, I don't think it fully captures it. I'd be interested to know how many of those 84% have thought about using it, got to the point of almost using it, and then just being like, no, I don't want to use it. You know, this is the... but this is the interesting question. I would have thought actually it's a much more radically exclusionary technology than an inclusive technology, because it almost demands much more intellectual activity to kind of think. And I think that's what excited me about it, because it piqued my interest intellectually. I was like, this is super interesting, I want to try to understand how to, like, harness the power of this, how it works, and it became an intensely intellectual excitement for me. You know... So I'll tell you what my thinking was, and you have a lot more contacts and you actually have taught for a living, I haven't. But my thinking of it was actually a little bit different, that it wouldn't be personalization. What it would allow you to do is actually move away from a lot of learning, which would allow, actually it allows people to get to the point where they can do a lot of rote tasks much faster, and then actually you can focus on real thinking. So I would have thought that actually it's its ability to push the quality bar up. I mean, you know, I don't think
education is really fixed, I think they really love this stuff, the more rote the better, the more predictable the results. I would debate on that, but which government doesn't like a standardized test? Ah, the overarching system, maybe, but, you know, like... So, but this is the interesting thing for me, actually. So, you know, if you take kind of the area I know best, which is, you know, history, and the way history is taught, um, in a lot of countries it's taught, like, in a positive scientific manner. You know, you do the test, this is the course. You really don't expect a high school student to push themselves to really think independently. Yeah, critical thinking, I think, is largely absent from the expectation. So that's what I thought, that, you know, once you enable people with these tools they can kind of cut all that crap out. But actually, and my debate has always been whether they do. But I want to go back. Why, like, why was your assumption that it would free up, and, like, let's go with the history example. Why do you think, why did you assume that, within history, and you might not have thought about it in terms of history, but it's the area I did, so why did you think that it would free up the next generation? So when I had used it, I used it on a lot of kind of basic German history, like 19th century German history. You know, you go through it, and what I saw was kind of two interesting things. One, the standard narrative, it was fucking brilliant. Like, if you want to write an undergrad level essay on the unification of Germany, yeah, you can do it in like a minute. Yeah, you can basically do it, yeah, in a minute, or, like, let's say 15 minutes. You give it enough context. Yeah, you give it enough context. But, like, how you think through... So it's a wonderfully definitive view of the world, and where critical thinking comes in is how do you push yourself to go beyond what is definitive. Because by definition, you're ingesting all of this to produce an output and a right answer. And with a lot of that thinking, you're not
really thinking through questions which haven't been asked in an obvious way. You're not really thinking how to weigh things which are not considered the most important. I mean, actually, for me, a really fun question: okay, standard history of the unification of Germany, and it's primarily a political history according to ChatGPT, it might take other stuff in, but, you know, write an essay arguing against the ChatGPT essay. It's much more interesting to see what an 18 year old would come in. And you can use ChatGPT to write the essay, fine. After you've used it, could you write something against it? Well, then you would ask ChatGPT to refute it point by point itself, and then it would. And could you think of a more complicated answer than the obvious one which comes out? It depends, and I think the answer is no, because what you're doing then as a student is outsourcing everything to ChatGPT. And then what you would need them to do, in order to be able to refute the original statement and the refutation of that and to come up with a counter-counter, is to understand the material. So what you're saying is, okay, fine, so ChatGPT can do things super quickly, but unless you engage really super deeply into the subject... Or are you saying that ChatGPT can be used to give you that overview in a much better way? Just say, this is the standard narrative. How well can you reason against that narrative? How well can you argue against it? You even want to use ChatGPT to write the refutation, fine, it takes about a minute to produce. Can you write a better refutation, even if you use ChatGPT? I think you can use ChatGPT to do what you're saying, because, as far as I'm aware, with what I've used, you could even ask it to say, okay, explain your reasoning, how you arrived at your refutation. So as a student you can ask ChatGPT to actually explain to you, I'm using air quotes here, but how it thought, or what it used to arrive at the countervailing conclusions, or, you know... So I
think as a student you can use it as a resource to improve your critical thinking. So this is the thing, because it spits out its arguments so quickly and definitively. Yeah. Having a reasoned response which isn't definitive... because this is also the reason behind the hallucination, it always has to be definitive. And actually, where does human reasoning come at its best? When it's grey and balanced and argues in its most creative way. So actually, that for me was the thing. Because, like, say for example that material I know really well, you know, it started breaking down after three queries. And it can get better, but the issue is, how do you weigh a non-definitive response? Well, the answer is... the answer is you're not going to end up using ChatGPT. But I think the answer is different. I think you are going to end up using ChatGPT, because actually, like, let's say for you, the unification of Germany, 19th century Germany, you could train a custom GPT on specifically the data, I mean the source material, the books, and then... No, no, but this would be an interesting experiment for you to do. Can you get ChatGPT, a custom ChatGPT... I mean, I can even show you if you want, you could do it as an experiment. But can you get it to get to the point where it reaches different conclusions? This could be like a critical reasoning of ChatGPT test. So if you really know the material, in my experience it reaches hard limits. And it's actually being trained on all that material. I know it has, but it's also being trained on trillions of other... I mean, so if you go the inverse way and start to ask it, like, historiographies of historians, it's really good. Sure. And guess where all the... I know it's being trained on that because you ask it and it's written. It cannot weigh arguments in a non-definitive manner, because that's the thing, it's also meant to produce an output. But have you used it enough to know for sure that it can't weigh it in a non-definitive manner? Have you asked, like, explicitly? Oh, no, I did
explicitly, like, you kind of instructed it to? Not... yeah, so when you get out of dominating there... So, okay, let's take the employment example, yeah, because it comes down to the same thing. If you ask ChatGPT to weigh who are the best candidates based on attributes of CVs, it will have an answer. Say we're uncomfortable with that answer. Yeah. You can't change the logic, so you'd have to change the inputs. But the question is, how do you weigh and reason inputs against each other? Because by definition it's something which needs to produce an output, which humans do as well, that is a product of human reasoning. But, you know, ChatGPT is doing that too, but we just don't fully understand how, like, it's weighing different elements of your query. And I definitely need to look into this more, because this conversation is sparking my kind of areas for investigation massively. You know what, we'll pick a few areas I know well and I'll show you based on what I know. And it's really interesting with history, because it assumes, because the data tells it this is the most important argument, that it always weighs those up. So it almost, with prompting, it will give you those things, but it systematically discounts less surface data. And actually, if you look at it, that's the funny thing about human reasoning. We don't always work on volume, we sometimes work on belief. And actually, it's the way undergrads learn history. In a lot of ways ChatGPT is better. And actually, if you look at it, that would be what you're thinking was: these are the best known scientists, historians. This is the thing, and if you're ingesting information in that way, you would never put an equivalency on a lesser source, by definition, for the way these things are built. So I think that you can request for ChatGPT to do that. Like, if you have that thought, it would be interesting to try it out, and I think we should. But it would be really interesting to see if... see if you can put into ChatGPT the custom instructions, like, these are my thoughts on
how you're arriving at your answer. I think you're putting too much weight on the most cited sources. I'd like you to do x. And just as an A/B test, because, coming back to what I started doing, I was like, I want to see how my queries impact the output. And I think that's really important. To really understand, we need to approach this empirically, right? You can't just throw out assumptions. If you're serious about it, you can't just throw out assumptions and then just let them lie. You have to be like, OK, let's test it. So I think this comes down to this question of fundamental intelligence, and, you know, if you know the Chinese room problem... No. John Searle? OK, I'd recommend it. We'll talk again, you should read it and we'll talk about it again. The Chinese room problem. Yeah, John Searle. John Searle, that's a recommendation, guys. So... Look, I think the question is, what is understanding? And it's a question of... it is fundamentally a question of, can machines think? And ultimately it's how you look at human cognition. Is human cognition purely a composition of our ability to process large amounts of information in a way which is differentiated from other species? Or is it our ability... is cognition also about our ability to exclude information against other things, and our ability to discount, and our need to reason, a reason even when there's no basis for it and there's no part of it? And at some level, do we fundamentally understand things? And it's not an easy process. But actually, the thing that kind of stands against the model of cognition they put in large language models is that actually cognition is a function of data, because they can already ingest tons more data, yeah, but it's also about our ability to weigh and exclude. So let me be flippant. Let's say you trained a large language model in the 1600s using all available information. You probably would have concluded... it wouldn't cost very much to train, the data centers of Google would
not be stretched; it would be half the windmills of England powering the ChatGPT of the Middle Ages. So let's say Middle Ages Italy, or early Renaissance if you want to be fancy, that's for the history buffs, by the way. And then you have a Copernicus come along. It's flippant, but it's a serious question: if you learnt in that mode, wouldn't you also put Copernicus at the stake? Go on. Because you're using all the available information, and the inherent biases within that information at that time would say: no, you're wrong. Our critical reasoning comes as much from our ability to critically exclude as it does from our ability to know.

Actually, this runs deep in the history of AI. Neural networks were completely discounted for 40 years, dismissed as the most idiotic approach, and then suddenly one Canadian fanatic decided: no, this is true, you're all morons. He's actually a good argument for human cognition, and against large language models and AI being intelligent, because there is some part of understanding beyond the data.

So that's why I got really excited. At least when I started using it for history queries, the way it processes information actually works against your ability to think critically, because you cannot weigh the thinking in an equivalent manner. With history it goes in waves; we have dominant explanations which change over time. I would be really surprised if today you took the conventional history of the Cold War and someone like Gaddis didn't dominate the answers coming out. Sure, and just for context, Gaddis being one of the preeminent historians of the Cold War. Yeah, for sure, traumatized into us for A-levels. High school.

Okay, so I'm just going to interrupt here, because what you maybe mean is that, even knowing this is an issue, when you get output for history queries it becomes very difficult for you
to critically analyze, because the answer is so definitive that it kind of blocks it off, even on a subconscious level. Right, that's what you're saying: you're like, oh okay, this is the lay of the land. But I would say to ChatGPT at that point: okay, let's get into a debate. I want you to act in the role of an interested expert talking to me, another interested expert; we're going to ask ourselves a series of questions, and for each of the queries I want you to posit your answer, give me your reasoning, and then question me. You know what we should do? We should publish the queries and answers. Yeah, someone can do a separate thing on it; we'll see if anyone can disprove it. But it's the form of reasoning which is antithetical. Right.

So I don't think that just because you use all these tools, suddenly... I mean, this is why I think it's a wildly exclusionary technology, for a couple of reasons. The bar on human cognition, when you have tools which work like this, is much, much higher, because how much learning do you need to do? And context is important. With ChatGPT, context is everything; the context you give it is everything. So if you have no context, your output is limited, full stop, and so is your ability to analyze that output. Like I said with my sales and marketing plan: I wouldn't say I'm an idiot, but I certainly have no real experience in sales and marketing, so I could only analyze it on the basis of my own knowledge, which is limited. Could I really predict whether... well, actually, no, I'm lying: I could predict that the financials over three years were wildly optimistic. But I think that's just following human trends; it's being trained on human data. You watch Dragon's Den, or what was the American one called? Shark Tank. People do this all the time. What was it called, The Apprentice? Yeah.
Yeah, yeah. I mean, all of these shows, they're like: oh, year one we're going to break Uber, year two it's going to be 10 million dollars, and year three we're going to make a unicorn. No, this was pre-unicorn. Well, whatever; people were like, we'll be at 20 million dollars. Yeah, like what, selling crackers? I don't know. These shows were made before low interest rates. Oh, not true, we've had low interest rates for a lot longer than you might think. I mean, The Apprentice was, when was it, 2011? Yeah, but only for like two years by then. We were at effectively zero from like 2009 till 2022. No, but we didn't internalize it until it was gone; we just didn't internalize it until like 2015, 2016. You say this, but my running joke to my sister in 2019 was: I know you're reading your children fairy tales, but can I tell them about a unicorn with 5% interest rates? Well, how prescient that was. How prescient. Fairy tales are facts. Yeah, but if you told someone we now have 7% prime rates on US mortgages, they'd be like: Rahul, what happened, did the gold bugs win?

Well, I feel like we've covered a lot of topics. It's been super interesting. I have to say, my great friend Rahul encouraged me to record an episode and then, being really good, said let's do it together; like a good friend, he held my hand through it. There's a lot to dive into here, and I look forward to the next one. But Rahul, I want to say thank you very much. I hope you'll be a future guest, if not a future co-host; I'm not going to tie you into that, we'll see. And yeah, see you next time, guys.
