The Case for Distributed AI: Varun Mathur, CEO of HyperspaceAI

00:00-49:00

Chat between @roydendsouza and @varun_mathur: getting inspired by Satoshi, the journey of learning C++, the need for AI to be abundant x smart x bias-free, blockchain as a social coordination system, OpenAI/Google, the need for accelerated scientific research, AI as electricity, India's need for local x decentralized AI, how to think of AI not as AGI or sentient but as a tool to help improve our lives, why not to use-case-box new innovations, the Oppenheimer complex of some powerful people in AI, and more...

Podcast · AI · open source software · distributed systems · Hyperspace · OpenAI · LLMs · Model bias · Bitcoin · Coinage · AI as electricity
446 Plays · 0 Downloads · 3 Shares

Transcription

The past few weeks in the world of AI have been eventful, with billionaires leading the AI race at war in America's courtrooms. Google's Gemini AI has cost the company billions of dollars in valuation. Human creativity and labor are under attack. However, there are those like Varun Mathur, CEO of HyperspaceAI, fighting to keep AI within boundaries and ensure its benefits reach humanity. Hyperspace AI aims to democratize AI by making it local, free of bias, and abundant. The battle between open and closed AI, as seen in Elon Musk's case against OpenAI, highlights the need for openness, transparency, and distributed networks for equitable accountability and safety. AI is compared to fire, and it shouldn't be controlled by a few billionaires, but rather should be accessible to all. Hey there, so the past few weeks in the world of AI have been pretty much like getting your mind blown at warp speed, if I may say so. Billionaires that have been leading the AI race are at war in America's courtrooms. Google, the byword for search on the internet. Google's Gemini AI went woke, and how? And it's cost the company billions of dollars in valuation already, and most of us may not have realized it, but human creativity and labor are under attack on every front. Especially now, it's more in focus with the tie-up between OpenAI, Microsoft, and Figure Robotics. But hold your horses, not all hope is lost, because there are those who are still fighting, fighting the good fight, fighting to keep AI within the guardrails, and to ensure that its benefits reach all of humanity. One such knight of the realm is Varun Mathur. He's co-founder and CEO of Hyperspace AI. He's joining me on Over the Horizon. Varun, great to have you. Yeah, thanks so much for having me, Royden. All right, so let's begin at the beginning, right? You're a desi boy in America, co-founder and CEO of Hyperspace AI. What is Hyperspace AI all about? Take us back to the genesis.
Sure, so the genesis is frustration, right? Frustration with the status quo, and the root genesis, you know, my backstory, I used to work at Merrill Lynch as a data analyst, fairly happy there. And the first realization came a few years ago as a father of a newborn. I used to think, you know, a few years down the line, AI is gonna take over, this world of job displacement will be there. How do I personally, myself, discover the next job? For me, I was a data analyst. How do I out-innovate myself? So I got on the entrepreneurship journey a little bit over there. And as I started exploring and looking at different systems, the first step of the way was the blockchain world. And in the blockchain world, I was deeply inspired by the story of Satoshi Nakamoto. And this has been told many times, and I've published more about Bitcoin, Satoshi, than most people around. And the thing that I learned was that if you're frustrated, and if you're driven, and then you connect enough dots, then anything is possible. And that's the inspiration which I drew a few years ago. And I realized that, okay, even in the blockchain world, there's a lot, you know, these are good social coordination systems at play, done at scale, done in the right way. There's a lot of benefits for humanity which can happen. So for me, packaging all of this and compressing all of this was an essay which I wrote a couple of years ago. And that was on the lines of what's the most fantastic future for humanity possible, right? That's building a social coordination system, and that plays the key role. So that started Hyperspace. And as we- You know, you sound a lot like Elon Musk. You know, maybe the trajectories are different- But do you draw inspiration from his story? You know, I don't have six kids yet and three wives, and I can't even start working on that. I am excited about the idea that, look, how much more can we push ourselves, right?
Because you think about space, you think about, are we the smartest civilization out there? Have there been other civilizations? Do we have to get smart enough to a point before we discover them? Or is it smarter to not be discovered, right? So I don't want to be in a situation where we don't push the edges of our capability and then we run into a not-so-friendly alien civilization. Also, the reason the venture is called Hyperspace is because that alludes to multidimensional travel, right? So we have yet to discover more than the dimensions that we know already in order to travel, not just to different planets, right? That's, you know, it's there, it's been done. It's fun to go from one rock to the other, but how do you get to the other stars and galaxies out there? You know, there is fundamental astrophysics research and work which is pending, and the world is gonna need millions of Einsteins, right? So we have to accelerate scientific progress, learning, teaching, all of these things. And if we do that, then maybe we figure out more wonders of the universe. And if we do that, then maybe we travel far and wide away. And that's worth doing because what if we are the, we are it, right? What if no one else, we still have- The only light of consciousness. Only conscious life, and you know, based on Twitter, like maybe the only intelligent life, right? So there's a lot to explore for us. So that's been the key motivation. And coming on the AI side of things, I was coming in from this blockchain background, and that was the idea that let's build these social coordination systems. And I had started a team, and we were doing some thinking around there. And we saw, we saw OpenAI take the world by storm. And we realized that there's something off here, right? Because, and my backstory is growing up in India, and there were like two broad things which have influenced me there, right? The first being just the story of India itself.
Story of India, 400 years ago, we were not paying that much attention. We had the British East India Company. They came to India and they said like, look, guys, we have figured out coins and currencies. And they basically took away local currency, introduced their own coinage, and by controlling the coinage, that's how the East India Company controlled India's destiny. So that was one realization that, okay, if we are not careful, AI being the power, the electricity, that it is, it makes no sense for a few extremely powerful people to be holding that power. No matter how good they are today, people change, entities change, and so on. So AI fundamentally has to be local. It has to be like electricity, and there are many different dots to connect over there. So that was one part of the story. The other part was my own: I learned how to be a C++ programmer in India, and I was fortunate enough where my dad hired a personal tutor for me, right? And the personal tutor sat with me. C++ was quite challenging. I was going through a phase in life, but because the tutor spent that time and my dad spent that money, I was able to get into programming, and that changed the trajectory of my life, right? So that took me to places I did not even want to go, but it took me places. So, in the world we have today, what we are telling the next college students in India today is like, look, yes, there is this wonderful world of AI happening around the world, but it's only really available to you if you spend 20 US dollars a month, right? And if I'm not mistaken, that's 20 times the monthly internet cost in India. So, again, connecting back to the local, right? If AI is like electricity, it needs to be priced locally. So we need to own it as well. And how do we reduce the cost? How do we make it more abundant at scale? And the last aspect also being about bias.
And this is what we saw with Google's Gemini and other products, where there are program managers on the US West Coast, right? Somewhere in Redmond, a beautiful city, somewhere in Silicon Valley, another beautiful city, but these are mid-level program managers who are deciding what can and cannot be taught to people around the world. So those biases, it makes no sense for us to be absorbing those biases or to be teaching those biases to the next generation of people. So if you go back and we say like, look, there is a way, again, AI being electricity, then it needs to be local, needs to be free of bias, and it needs to be abundant. And there cannot be just one or two companies which control that destiny across the world. So Hyperspace itself is born out of that frustration and to say, like, look, can we use local devices and can we stitch them together in a way where it makes sense and give the power back to the people? So. Yeah, while you've been talking, I've been sharing your post on X about Hyperspace AI and just how you've thought about it. And it's really fascinating because there's so much that we need to talk about and so much that we need to do in terms of democratizing this new and transformative technology. So far in our history, whether it's been fire or the wheel or the loom or anything thereafter, it's always been an additional tool in the human toolbox, in the human toolkit. The difference is that this technology is so transformative that it very well could replace human beings, and therefore raises so many questions about how we approach it, how we legislate it, how we think about it, how we talk about it. Yes, so let's start talking about all of this. I'd like to start off with your thoughts about this entire battle of open versus closed AI. We've seen Elon Musk take Sam Altman and OpenAI to court in the past two weeks. Of course, there have been various interpretations of the hows and the whys and whether he has a case or not.
But I guess it's more about the openness of OpenAI and the founding principles and the founding ideas, so to speak. But it also is something that you've talked a lot about: the need for openness, the need for distributed networks, the need for, like blockchain, equitable accountability spread out. And because you have transparency that's built in, you have accountability. And by default, if you have accountability, you have a higher level or a higher degree of safety. Yes. Yeah, I think the base idea is AI is the new fire. And instead of there being just one or two people, one or two main entities saying, like, look, we hold the fire. And it's a business model issue because no matter how big OpenAI or other companies get, they still have to trade away their equity in return for GPUs, right? And then they're making this fire available to the world. And- I love your analogy about a few billionaires holding the fire and saying, hey, listen, trust me with this. Yeah, so, you know, as much as, you know, there's a lot of inspiration in all of their journeys, and what they've done for mankind is amazing, right? And that is great. However, we would have had a very different debate just over a year and a half ago, right? Because that's when OpenAI's closed models and what they had available was really the only choice. And some other entrants came in the field and they said, like, look, you know what? We've also spent billions of dollars in training these models and we are gonna make this fire available to humanity. And Facebook came out with Llama and then Mistral came out with its models. And what we realized is this is amazing. Now we can take those models, run them on local devices, and an entire open source AI industry cropped up, right? And folks who are holding regular day jobs, but, you know, when they're coming back at night, they're figuring out how do we run these models on local devices?
How do we, you know, make them smarter and cheaper and so on. And where we came in from the Hyperspace perspective is, we were like, this is great, right? There's so much of this different work happening here. Can we then stitch this together in a network? Because the network intelligence as a whole is what actually competes against the likes of OpenAI and other big complex AI companies. So it's a user experience journey, and that's where the network approach comes into play. Okay, so let's get this straight. Where do you stand on this? Do you stand with Elon on this or do you stand with Sam Altman on this? I think I'm trying to draw our own quadrant, because to me, Sam and Elon are in the same zone where, to them, it's essentially a business model issue, right? Because LLMs have become commoditized and no one can really figure out how you make money from that. And when people say, hey, look, we want open source, and when VCs and others are saying, we want open source, in general, what they're saying is, we want that as a go-to-market strategy. We want that to recruit people. We want people to like what we are saying. And then when it comes to a much bigger model, that will not be open, and that will again be based off an API. So we have seen that play out a few times. And so that's one world, right? And it's, in my opinion, purely driven based on who has what business model. And also who has the bucks. I think- Well, what do you make of the $7 trillion that Sam Altman supposedly went looking for? And I'm sure that there's no shortage of people who would like to back him and fund him, and the Emiratis and the Saudis apparently were lining up outside his door. And why not? Incidentally, just the day before this Wall Street Journal article came out, I think Sam had, if I can just pull up his tweet, there you go. Sam had tweeted out saying that, hey, look, we believe the world needs more AI infrastructure. I mean, these are facts.
You need more fab capacity, energy, data centers, et cetera. Elon will say the next wave is gonna be the need for step-down transformers. I mean, you need the bricks to build the data centers. You also need the chips to make sense of all of that. It just seems as if this wave has come upon us and we're not quite aware of what it all means, because a lot of it is just technical stuff. People don't really understand it, but the implications are massive. Yes, I think, I hope Sam succeeds, right? Because he's like, the world needs more infrastructure. We need a lot more investment across the board, and that's needed. However, where does the $7 trillion come from? Because that would imply hyperinflation, whoever is printing this money to do this. So that part to me is not clear, right? But again, I hope there is amplification in the AI infrastructure which happens across the board. And I also hope there's a lot more push towards better use of the infrastructure we already have. And the models which we already have, can we make them more efficient? And what we have learned even at Hyperspace is that when we have scarcity, when we don't have a lot of resources, it forces you to think and to see, hey, how can I get maximum mileage out of the existing things that we have? So- Yeah, you're forced to innovate. Forced to innovate. So I think that, combined with amplification of the world's infrastructure, is needed in totality. But you don't necessarily see Sam Altman as the evil genius trying to control the world. I think I, you know, I believe people are governed by- I mean, hang on, I need to clarify. I think it's, I know I'm guilty of oversimplifying things. That's right. Yeah. But, I mean, since we're talking to an audience that would really like to know what's going on, and would really like to get an expert industry opinion about it, so there you go. Yeah. So I don't believe Sam is evil, right?
I think that personification is not there in my mind. I think OpenAI, and Sam, they're following the incentives which are there before them, and the game in which they find themselves. And that game is: build the most powerful AI which is possible, and be the sole beacon of that AI across the world. And over a period of time, costs come down, accessibility goes up, and that's their thing. And that's where I have a bit of a problem. When someone wants to be the sole beacon, I have a problem with that monopolistic tendency. And that scares me. Yes. Because if you're the sole beacon, you not only have a monopoly, you also become the strongest entity in that ecosystem, which means there's no way to keep, there's no system of checks and balances. There are no guardrails. Yes. I think that then becomes the new British East India Company, right? And that's the reason to think about that. Why do we need a specific dependency on this one company? Because AI is going to be pervasive in every single thing, every product we use, hardware, software, the car we drive, the phone we use. Whatever we do, it's gonna be everywhere. And there are a set of companies which are thinking on-device or consumer-first, or like, let's give people the hardware themselves. It could be fancy or like pricey hardware. But if you look at what Apple is doing, right? Apple is increasingly making it more efficient and more powerful to have AI on device. So there's that world happening as the world of chips gets more abundant and chips get more powerful and cheaper as time goes on. That also is aligned with this. Couple of other things happening over the next few years: bandwidth is getting cheaper and faster. So all the technological trends are headed towards the power actually being with the end consumers.
So the model that, okay, there would be one company, and just like we don't have a single electricity company which distributes electricity around the world, that model will not work in AI either, right? So I believe there will be like a few powerful things, and some enterprises will use them and so on. But on a consumer day-to-day basis, our AI is sitting on our devices. It's on our phones, our laptops, our smartphones. And the question is, can it be smart enough to where it is better than what we are getting from one big company? And I suppose wrapped into all of that also, Varun, would be greater control over your own data and a lot more data privacy. Yes. And that data privacy is only possible if you have, I think we should think of this as the browser, right? On the browser, on any website, the browser doesn't stop you. The browser is firmly in your corner, and the browser will work with you, and it augments your life as a human. The moment we get into a zone where we say, look, I have to give all my data to this one big company, and that one big company will use it to train its model unless and until I pay them $20 a month, that makes no sense at all, right? So the AI browser era would be where it's guiding us. It's guiding us into living more useful lives. It's helping us save money, save more time. Our life as a human is more efficient. Do you feel, sorry, sorry, it's interesting what you say, but it just got me thinking about the potential for personalized AI agents. Yes. Do you think that's a solution? Not only to add data security, but also an opportunity to do something that has so far been extremely challenging: to personalize. So we've seen targeted ads, but even targeted ads cast a net quite wide, right? Yes. But if you could have a personalized artificial intelligence agent, that would be transformative in ways that we never even thought about. Yeah, I mean, you're right.
So right now we do a Google search and we see some ads next to that search, right? And we might click on them, and Google is making $200 billion in revenue from just ads across its platform. But agents which can act on our behalf, right? And agents which are firmly aligned with us and our motives, these agents are the ones now looking at the ad. These agents can go on a number of other products or a number of other apps. And they are coming back to us with more actionable information. And so there is this new meta layer which will inevitably emerge above the browsers of today, above the app stores of today. And that's what you spoke about, the personal agent, as well, right? So, and I don't think it's the, all the approaches that you have seen so far in the industry are around, you know, here's this agent, it has this personality, it wants to be a friend. And I've been a geek, I've been a nerd all my life. I don't wanna make friends. I have trouble catching up with the friends I have in any case. The last thing I want is to help grow an agent into being a friend of mine. What I do want are agents which work for me behind the scenes and get the job done, right? So I don't care about their personality and how they talk. I care about, you know, are they finding good stocks for me, or are they telling me to download a Hyperspace node in 2024 when it's still, you know, early days. What about embodied AI? Embodied. Do you think, I mean, personally, since you say this, do you think, yeah, embodied AI would seduce you to have a personalized agent? I think people like talking to other things online, right? And other things online, if it's, and this is a human-like conversation, human-like things, I think those things will work as well. However, I was just thinking about this this morning, right? As humans, we are very quick when we see someone's, just their eyes, we can tell a lot, right? And in just five seconds of interaction, we learn a lot.
And if we don't see that happening, it's almost instinctive, right? So I might not be the person to ask about embodied agents, because I was staying in a hotel like two days ago and I was trying to extend my checkout time. And basically, there was an agent on the other side, an AI agent, and it could not understand what I was saying. So I got frustrated and I said, hey, look, can I speak to your manager? A human, right? Yeah, at first I asked, can I speak to a human? It didn't understand, kept on giving me choices. Then I asked, hey, look, you know what? Can I speak to your manager? And it still didn't understand, and then I just used some choice words, and I'm like, that's my first embodied AI agent experience. I'll be avoiding companies and products who replace humans and just say, hey, no, just talk to this AI instead, right? So I think we need to empower the humans who are there as opposed to replacing humans. Yeah, interesting. All right, let's just get back to the whole issue of open versus closed AI. And I'm just gonna pull up this tweet of yours. This was in response. There you go. There we go. So this was a tweet that you posted in response to Rajiv Chandrasekhar, an Indian minister, talking about AI, and the Indian government has been working a lot on coming up with an AI policy, and you linked it to the Swaraj that you talked about. So India is a use case where you have such diversity and you have problems of data security, data ownership, privacy. You've also got such massive potential socially, economically, demographically. We in India need to think about AI very, very seriously. What are your thoughts? I think the reason I posted this is because I saw a lot of criticism of India over the weekend, right? And criticism across the board. And I thought a polite response was needed, and it was important to lay out why it exists and what the thinking there is. And I view India as many distinct cultures, many distinct regions.
And the idea of localization has played out very well there. And the idea that yes, there can be distinctiveness, yet it's also integrated in a way, that is exactly what needs to happen with AI as well, right? There would be many different flavors. There would be many different ways to teach the same thing. And the notion that there is just one powerful board deciding what biases are and everybody in the world uses that one thing, that makes absolutely no sense, right? So I believe in a world of millions of smaller models, like millions of smaller models, which are very specific to their distinct cultures and distinct flavors. And as a user, I should be free to choose what model I run, how I run it, and where I run it as well. And that freedom is critical. And that freedom also needs to come within the right user experience, because we can think about this strongly from a philosophical or ideological perspective. There can be the technologist viewpoint that, hey, look, we need decentralized AI. There can be intelligent discussions about it. But most people, even me as a user, we don't care about the ideology behind it. All we care about is, look, is it getting the job done for me? Because I have other things to do. I'd rather watch this reel or I'd rather watch this YouTube video. And I don't have time to choose or be on the side of the technologists. So as we build these- You just want to get on with doing what you've set out to do, assign the task. Yes. So I view the task here as, if you look at the electric vehicle world, right? In the electric vehicle world, people aren't buying Teslas because they are electric and good for the environment. They're buying them because they're really cool cars, and they happen to be electric, and that makes them a really cool car as well. So- There's a distinct value proposition there.
It's a distinct value proposition, and something similar on AI as well, where people use local on-device AI not because of ideology or because they got inspired by a podcast, but because it's really cool. It makes a lot of sense. And it is in that zone because it happens to be local, on-device, decentralized. So I think that, from a user perspective, is critical to do. Yeah, it's very interesting. We, as humans, when we interact with technology, we tend to anthropomorphize a lot of technology. I found myself, when I work with ChatGPT, I find myself saying please and thank you, which is crazy. I mean, the machine doesn't care. The algorithm doesn't care. I guess it's just windows of insight into, yeah. A future among the stars as homo galactica. Yeah, and our path to evolution from homo sapiens to homo galactica, so, yeah. It's funny because ChatGPT, and other AI models as well, you can bribe them. You can tell them like, look, I'll tip you, and it'll give you a better answer. So I think, you know, there will be places around the world where it puts it right in, right? It's like, yeah. Nudge, nudge, wink, wink. Yeah. All right. Hey, so let's move on. Just one last question before we do, actually. And I want to talk to you about the counterargument to open source. And often we find the counterargument to open source is, hey, I mean, look, this is serious technology. In the wrong hands, it could cause a lot of serious harm to society and individuals. Right. Because you could, I mean, everything from phishing, I mean, very sophisticated phishing, to hacking, to impersonation, identity theft. I mean, massive cybersecurity issues. And it just goes and builds on and on and on. And we're already seeing that. Let's be realistic about it, right? AI is already helping bad players, bad actors, do a lot of bad things in a much more sophisticated way.
So what do you think about the counterargument that maybe we don't need to keep this technology as open as everybody thinks we should? Sure. You know, the baseline has been, it's a tool, right? And what's a tool? A currency is a tool. Can bad people use it to amplify their badness? Yes. The C++ programming language has been a tool. Good people have used it for Bitcoin. Bad people have used it to also write all kinds of things, right? So if you look at it as a word processing program, you look at it as a calculator, a programming language, there would be good and bad users. I would imagine a world 20 years ago, right? Google was coming online, an extremely powerful search engine which could help you answer any question. Should it have been limited? What if Google itself was limited? Would we have seen the amount of progress in society? Along with that, there have been people who use it for bad purposes, that's there too. But the good which happened in society is something I'm not sure we can even measure. So with AI and the way it's developing, if you look at it as a tool, if you look at it as math and code, that can help me as an individual, as a regular person, make a little bit more money, save more money, make my life a bit more efficient, a little bit happier, that's it, right? And that's very powerful, that's needed. There would inevitably be bad actors who would use it. And what already exists for bad actors is the laws of society, right? We don't have to write new laws, because the existing laws, the existing legal framework, if you do bad things, then society has a framework to deal with that. And if those laws are not enforced, sure, that's a different thing, right? Yeah. But do we need- I guess the question, Varun, is, does AI help bad actors? Right. Because of the sophistication with which they can do bad things. Right. It's easier to get away with it. I think the bad actors, does it amplify more bad actors?
Does it make a bad actor really have a significant, harmful impact, right? Some of those cases have been made, right? And the question is, can you not do things through Google search itself, right? If you spend an hour just searching for the right things, you can get all kinds of data just from that. And if you then combine that with, say, your skills in programming languages like C and C++, then we've already seen worms and viruses and things coming up, right? So those would be more amplified in the AI world. However, the tools to battle them will also be amplified. And if we don't invent things, or if we hold back the progress of things, we are imagining all the bad scenarios, because that part is easy to do, right? What's tough is for us to imagine all the good scenarios. And if you had followed this line of thinking, then the first time humans invented fire, they shouldn't have done it, because, look at the damage. Or split the atom. I mean, you know? Yeah. Yeah. I guess we'll find out, even with this technology, we'll be wise in hindsight, perhaps. Yes. Yeah, because it's so difficult to predict the direction and the manner in which it will evolve. Yes. Cool. But speaking of Google, that was a perfect segue: Google, or Alphabet as I say, has been Googling for redemption after Gemini went woke. Yeah. And Google's Gemini AI went a bit too woke, portraying African-Americans or Black people as Nazis, people of ethnicities other than German as Nazis, comparing Elon Musk to Nazis. Even the founding fathers of America were depicted as African-Americans. Yes. Of course, Google has paid a very heavy price for this. The company's lost billions of dollars in valuation on the stock market. Yeah. There have been calls for Sundar Pichai's resignation. I think Sundar himself has called it unacceptable in an internal memo to Google staff. There's a lot that's gone wrong there. Yes.
And while we talk about the effects, I would like to talk about the cause and what lies beneath. Yeah. In every AI company, including yours, I'm sure, there will be these little cabals, well, teams of researchers and technologists and engineers who would be part of, let's say, an ethics and integrity and technology and accountability team. And these are the guys, these are the faceless men and women who decide the weights and the balances and the hyperparameters, and what you train an AI to be. It's like bringing up a child. If you train a child to think evil thoughts and do bad things, the child will probably grow up being a menace to society. And how you train an AI algorithm or an AI model is very similar to that. And everybody's seen the effect, but the cause is a lot more worrying as far as I'm concerned. Have we given up too much of our common interest to these small little groups of faceless men and women who work behind closed doors in these AI companies and decide the very nature of AI algorithms, the very nature of these AI models? Yes. I don't blame a company specifically. To me, the issue is with the structure and the incentives at play. Imagine someone gets hired as a mid-level program manager at one of these big companies, Google or elsewhere. That person has an objective, right? And that person will introduce his or her own biases into whatever is going on. And over a period of time, that bias gets bigger and bigger and bigger, right? So that's what we are seeing manifest itself in these things. It can produce images, but it also has these inherent biases which trip it up. And I think any other company which tries this will go through a similar pain point as well, right? Unless they try a reverse bias and flip in the other direction.
So- And that's what's happened, apparently. Because, I mean, look, in the early days of ChatGPT, we saw a natural bias because of the data sets that these models were trained on. I remember talking to a friend of mine on an earlier podcast, Hassan Ragab. He's an Egyptian artist and an architect. And he was trying to create images of Cairo, the city in which he grew up. And he couldn't do it successfully, because most of these AI models didn't have enough images representing non-Western cultures and histories in their data sets. And fine, to begin with, that was an evolutionary step. It was understandable. But in the here and now, in this day and age of AI, and I know it sounds a bit strange because it's only been a year and a half of this AI era, for Google to do this and swing completely the other way, right, and just overcompensate, that is a lot more serious, because it points to conscious decisions on the weights and balances, the hyperparameters, and how you train those models. It's not just about data anymore. Yes, and I think this brings us to open source AI, because while Google is on this swing back and forth, three months ago we saw OpenAI having this major boardroom crisis that went through two CEOs in a week. And that just told us that there are these very few powerful people who may not be making the wisest decisions, and they currently hold the power over what humans get taught over a period of time. So we have to take that power away from them, right? That is the entirety of what we do in our lives, right? We don't have just one culture. At the end of the day, this is also a cultural thing, right? So whatever culture the mid-level manager at Google or elsewhere is bringing in, we don't want that one culture across the world, right? We want culture to be localized. Culture has always been localized. I've grown up enjoying Bollywood, and there are different cinemas as well.
So it's localized, and that only comes from moving away from this model that, hey, look, we're gonna have two, three, four big companies, they train these big models, and they're the ones making it available. So we have to keep it local. And I think unity is strength, right? We can network different, smaller AIs together and basically build the India of LLMs, right? I think that's the vision. So the next logical question is, how do we hold these faceless men and women to account, right? How do we put checks and balances on them? Because as with every technology, our legislators keep playing the game of catch-up. And I'm afraid that this is a game we're gonna lose if we resign ourselves to playing catch-up. So how do we go about putting in these checks and balances and holding these people, these cabals, to account? I think the first thing is that agencies will have to be extremely supportive of open source AI and anything being built using it. Because we've seen a very concerted push by some of these big closed AI companies, which have this lead in AI, to try and close the gate behind them. So they'll lobby hard in Washington and other places as well. And the executive order which came out from President Biden late last year basically said that if you're releasing models which are over tens of billions of parameters, then you need our permission. So it was an order against open source AI, and that actually fueled the movement around decentralized AI itself, because people realized that, okay, we have a limited amount of time before these things become enforced, and then it gets harder and harder to be an open source AI venture. So it's quite important. And talking about security and other aspects, if a model is closed, right, and if the company is refusing to disclose it, and if it's also being used by a lot of people, then tough questions should be asked of it.
Like, why is your model closed? I think generally whenever regulators, as you'd like to push for, you know, look at these closed AI companies, whenever regulators get involved, you'll end up with things that nobody likes in any case. So I think it's best left untouched. And we say, look, society's existing laws are good enough, and they can manage this AI revolution. We don't need new things written down. You're putting a lot of faith in these companies, Varun. Perfect. Well, I have less faith in bureaucrats and others writing just the precise things, because that's a function of lobbying, and who has had tea with whom, and who knows what we'd end up with. Yeah. No, I'll give it to you. It's really a very tough nut to crack. It's a very tough question. And there are pitfalls and landmines along the way, and it's very difficult to chart a course of action that is most equitable to all. But I guess we have to start talking about it, because it's now or never. And since we're talking about now or never, I think we need to talk about just how advanced AI has got. We've seen Sora released by OpenAI a few weeks back, generating videos from prompts. And what amazes me is the maturity of its temporal coherence, its ability to consume large volumes of video data and understand the physics of our world. I mean, things like ray tracing, things like gravity and the interaction between objects in a three-dimensional space, and object tracking. It's phenomenal. It's mind-blowing. Yes. And what was even more stunning was its ability to not just understand our world, but also simulate and imagine new worlds. And I can't help but think about the progression from silicon-based intelligence to silicon-based sentience, and how far along that road we've come. And remember, Sora is just what's been released to the public. Right.
And just going back to the entire case that Elon has brought against Sam Altman and OpenAI, I feel it's more about disclosure, and his attempt to bring what lies behind closed doors out into the sanitizing sunlight of daytime, and to help us understand just how far along the path to AGI they are. So let's talk about this for a bit. And let's start by defining silicon sentience. Okay. How do you do that? Because if you can't define it, you can't recognize it. You can't identify it. Yeah. I mean, as we were having this chat, I was thinking about parrots, right? I've seen parrots which can mimic certain humans and say words. And you might think, look, this parrot is very smart. Is the parrot becoming sentient, or is the parrot simply being trained so it can say things and sound like a human, right? So to me, that's the world of AI. It sounds very smart. It can do a lot of things, but it doesn't fundamentally have any consciousness. We can pretend, and we can evoke that feeling, but to me, that doesn't actually exist. And what kind of perpetuates this is that there are a lot of very powerful people in AI, right? And all these people in AI feel it's their moral responsibility, because they have so much power, to think about all the evils which this sentient AI will bring for humanity. And they alone are the ones who can save us from it, right? So we ended up in a zone where we are thinking, AGI is here, AGI is gonna be really powerful, whereas we're not realizing, look, it's a very smart parrot, right? And are we using the parrot for our day-to-day life? The way I'd like to use this AI parrot is when I get on a plane. But so many would argue that the parrot is sentient. It can come across as that. And if you want to view it from that perspective, you can believe that. But we are seeing a lot of demos. We are seeing a lot of cool use cases, and great things could happen.
Like, we can have whole movies generated before we get on a flight, and they could have the characters you imagined, and you can get good entertainment out of it. It can imagine worlds. It's seen a lot of video games. All of that is amazing, right? But it doesn't inherently have a motivation to think and to be conscious like a human does. So it's not clear to me what AGI means, or is it more a power-evoking term? Like, you know, the Oppenheimers of the world, right? Like, look, I have so much power. I wield it. Am I wielding it smartly enough or not? So I see that Oppenheimer complex going on with some powerful AI people. Yeah, yeah, for sure. But, you know, I ask this question often to my guests, and I'm still searching for the right answer. Okay. Do we need a new Turing test? Is it necessary, though? Because if we have tools which can improve our lives, we didn't ask the same question for the calculator or the word processing software or the spreadsheet, right? Why should we look at this bit of math differently, right? It's helping us live more intelligent lives, and that's good. And yes, it can do a lot of things now. And what it means is it makes us humans smarter and more efficient, right? AI systems can now do grad-level puzzles and math Olympiads and IIT exams in India. What does that mean for students, right? You know, it means we ask tougher questions, right? We change the way people are taught. So I think all that is in flux and has to be readjusted. Right. Yeah, but then because it's so powerful and because it's so useful to us, we as human beings tend to anthropomorphize stuff. Yes. And I can't help but wonder, does this also give it the ability to fool us more easily? Yes, for sure. And I think that will happen. And I think it's what you mentioned earlier as well, right? AI being used for scams and stuff. So phishing scams will become more prominent.
This video, down the line, could be used by scammers to impersonate me with my family: like, look, here's this person speaking, here's how he speaks, he blinks every second or so. So all of that will be used to phish people. And we're gonna need tools to determine, hey, look, is it authentic or not? So that is unfortunately gonna come. So how do we predict? I mean, how do we predict the course and build in checks and balances? Is it through personalized AI agents? Correct. Or is it through greater awareness? Of course, I understand that having conversations about it and increasing public awareness about the pitfalls and what lies out there helps. But is there a one-shot solution? Could we hope for a one-shot solution to this? I think we are on a spinning ball, right? Going around another spinning ball. And there are other balls in the sky, and rocks which keep smashing into our ball once in a while, right? So I think we are on this ride together, and we cannot micro-control the future, right? We cannot say, look, we invented fire and we're gonna box the fire into these use cases down the line, right? There's gonna be good, there's gonna be bad. And we have to live as a society. And there would be things on both sides of the aisle, right? There would be people who would use AI for horrible things. There would be people who would use AI to uplift many more people, to lift hundreds of millions, billions of people out of poverty, right? And provide amazing lives to a large number of folks. So there's the larger, greater good for humanity as well. And this helps us grow as a civilization, helps us discover things which we do not even know we can discover as yet. So I think if you get into the zone of putting checks and balances around it, which is what the big closed AI companies inherently want, right, it's good for their business model, they're strongly aligned, and they're doing right by their shareholders.
But I'm not their shareholder, I don't care, right? I care about having this power myself, and about everyone around the world having this power themselves. And so, yeah, it leads to a different mindset altogether. To be continued. To be continued, sure. Yes. All right, it's been such a wonderful time talking to you about all things AI, all things OpenAI, the court case, Gemini. We could talk till the cows come home, but it's been such a brilliant time having you on Over the Horizon. Let me just pull up your profile on X, and this is where you can follow Varun. He is at varun underscore mathur on X, and Hyperspace AI is also on X. Check it out, get involved. It's a great cause, and more power to you, Varun. Thank you for your time. It's been wonderful having you. I think we have to have you back again. A lot of things left to talk about. Sure, love to be back. Thank you so much, Royden. Great chat. Thank you, Varun. Thanks so much.
