Chaos Labs New Hires_ October 1st, 2024 (good except new hires)

Jacob Karsch

Transcription

Chaos Labs, a company founded by Omer Goldberg, is using blockchain technology and oracles to address the issue of fake information in the age of AI. Oracles act as digital messengers, bringing trusted real-world information onto the blockchain. With AI making it easier to create fake information, Chaos Labs aims to build a trust layer for the internet by using oracles to verify the authenticity of information. They have raised $55 million in funding from companies like PayPal and Coinbase Ventures and have assembled an impressive team with diverse backgrounds. The use of oracles for trust can have real-world implications, such as providing accurate information for prediction markets and elections. Chaos Labs' edge proof oracle, which uses AI, can analyze and verify information from multiple sources, reducing human bias and providing objective facts. This technology can be applied in various areas where trust is crucial.

Hey everyone, have you guys ever like scrolled through Twitter or TikTok and just been like, all right, is any of this real? Right. Because same, it's getting kind of wild out there. Yeah, yeah, for sure. Like we've got AI generating those super realistic faces and all the deep fakes that are just everywhere. It's honestly getting really hard to know what's real and what's, you know, completely made up. That's true. And that's actually what we're diving into today is this whole world of AI and everyone trying to figure out, okay, how do we actually know what's true anymore? And you know, we've been doing a lot of research, reading a ton of articles, and there's this one company, Chaos Labs, that's been doing some really interesting stuff that we wanted to talk about. Yeah, they're doing some fascinating work, Chaos Labs. And I think what's really interesting is that it was actually founded by Omer Goldberg, and he has a super interesting background. He used to be in the Israeli special forces. Whoa, really? Yeah. So talk about a career change. And now he's applying all of that experience to the world of something called decentralized finance, DeFi. Yeah, DeFi. It's a mouthful. Basically, you can think about it as like rethinking finance, but using blockchain technology. Okay, DeFi blockchain. I'm already feeling a little lost. I feel you. It can sound very, like, buzzword-y, but honestly, it's not as complicated as it sounds. Okay, good. Just think of blockchain like a really, really secure way to keep track of information, like a ledger. And then DeFi is just a new way of doing all those things that we're used to in finance, like lending and borrowing and trading. But instead of going through a bank, you're using this blockchain technology. Got it. Okay. And what Chaos Labs is focusing on within that is something called oracles. Oracles, like mystical creatures that can tell the future. Yeah, I know, right? The name is very evocative, but no, not quite. So in the blockchain world, an oracle is kind of like a, I guess you could say, a digital messenger that brings information from the real world onto the blockchain. Okay. So think about it this way. Let's say you're building an app that, I don't know, uses weather data to allow people to bet on whether it'll rain tomorrow or something. An oracle would be what's actually responsible for feeding that weather data, taking it from a trusted source and putting it onto that app in a really secure way so that people can actually rely on that data.
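To make the "digital messenger" idea above concrete, here is a minimal Python sketch of an off-chain oracle job: it fetches a value from a trusted source, wraps it with the source, a timestamp, and a content hash, and publishes the report for an app or contract to consume. This is an illustration only, not Chaos Labs' implementation; the function names (fetch_rain_probability, publish_to_chain) and the report layout are hypothetical.

```python
# Hypothetical oracle sketch: fetch a trusted value, wrap it with metadata,
# and publish it so a consuming app can rely on (and later re-check) it.
import hashlib
import json
import time


def fetch_rain_probability() -> float:
    """Stand-in for a call to a trusted weather API."""
    return 0.42  # hypothetical: 42% chance of rain tomorrow


def build_oracle_report(value: float, source: str) -> dict:
    """Wrap the raw value with metadata a consumer can verify."""
    payload = {
        "source": source,
        "value": value,
        "observed_at": int(time.time()),
    }
    # Hash the payload so later tampering with the report is detectable.
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {"payload": payload, "sha256": digest}


def publish_to_chain(report: dict) -> None:
    """Placeholder for posting the report to a smart contract or app backend."""
    print(json.dumps(report, indent=2))


if __name__ == "__main__":
    publish_to_chain(build_oracle_report(fetch_rain_probability(), source="weather.example.com"))
```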
Okay, I think I'm starting to get it, but I'm still not quite sure how this all ties back to the whole, like, AI and trust thing that we started talking about. Totally. So this is where it gets really interesting, right? So AI makes it so easy to create all this fake information now, like crazy easy, generating really realistic text, manipulating images, manipulating videos. The possibilities with AI are kind of limitless. Yeah, and it's honestly getting kind of scary how good some of these fakes are getting. Oh, absolutely. And Goldberg actually pointed out that, and this is kind of wild, that the cost of generating a fake tweet using AI is now a fraction of a cent. So imagine if you have even a small budget, the scale of fakery that you could unleash is kind of terrifying. Oh, yeah, for sure. And all of this synthetic information that's coming out just makes it incredibly difficult to figure out, okay, what's real, what's not, what can I actually trust here? So it's not even just about whether we can personally spot a fake when we see it anymore. It's the fact that there is so much of it out there now. It's overwhelming. It's completely overwhelming. So how do we even begin to address that? Is there any hope? Well, this is where Chaos Labs comes in. So they believe that oracles, beyond just this traditional role that they play in DeFi, which we talked about, can actually be used to verify the authenticity of information in our new AI age. Oh. Yeah. So think of it kind of like a spam filter, but instead of weeding out unwanted emails, it's weeding out any sort of fake or misleading information online. Oh, wow. So they're basically trying to build a trust layer for the whole internet. Basically, yeah. They want to give people a way to know that the information they're looking at is actually true and reliable. Okay. That's a big decision, right? Building trust for the entire internet. So how do they plan to actually pull that off? What makes Chaos Labs different from anyone else with a big idea? Well, they're not just talking about it. They're actually building the technology and they've got the funding to do it. They just raised $55 million from some pretty big names like PayPal and Coinbase Ventures. Okay. So they're not talking about like a couple of guys in the garage anymore. This is serious. No, this is legit. And to get that kind of backing from those companies, it really says a lot about how much confidence people have in what they're building. Yeah. Those are some serious investors. Okay. So they've got the money, the tech, but what about the team? Building something like this has got to take a pretty unique set of skills. Oh, absolutely. And that's where things get even more interesting because they've been hiring and the team they're putting together is seriously impressive. They hired Max Button, who's a former Israeli Air Force pilot. Wow. Really? Yeah. And on top of that, he worked at the Ministry of Defense. So talk about high pressure decision making. That's quite the resume. Yeah. Right. So that kind of experience, it's invaluable when you're dealing with something like this, which is such a new field, high stakes, and the tech is changing constantly. And then there's Atta Kekulu. He's a quantitative researcher with a PhD who actually built a Turkish lexicon for understanding sentiment on social media. He built a whole lexicon just to understand how people feel about stuff on Twitter? Pretty much. Yeah. 
Which shows how much he understands how complex language is, especially in a place like social media where context is everything. Yeah. For sure. And then on top of that, they've also got this guy, Daniel, who goes by Whale Hunter on Dune Analytics, which if you don't know, is like the place to go for blockchain data. Yes. And he is honestly a wizard with that stuff. Can find patterns, insights, you name it. Wow. So it's like they're assembling a Justice League of data nerds. Basically, yeah. And it makes sense when you think about the problem they're trying to solve, right? Yeah. You need all those different perspectives. You need that data expertise, the security knowledge, because obviously security is huge here. Even linguists, because language is so important. It all ties together. Yeah. It all comes back to trust in the end. Exactly. Okay. So we've got the vision, the funding, the tech, the all-star team. But can you give us an example of how this actually plays out in the real world? What would using an oracle for trust even look like? Sure. So let's talk elections. Remember the 2024 Venezuelan election? Yeah. Wasn't that a whole thing? It was a mess. A total mess of conflicting data, people accusing each other of fraud and manipulation. Nobody knew what was going on. Yeah. I remember that. It was chaotic. Exactly. Chaotic is the perfect word for it. And the thing is, that uncertainty wasn't just like a media narrative. It had real world consequences. Like traders on these things called prediction markets were placing their bets based on who they thought won, not who actually won, because nobody knew. Oh, wow. So these prediction markets, this is where people can bet on the outcome of future events, right? Exactly. So how do oracles fit into all this? So one of the platforms that hosts these prediction markets, PolyMarket, was using this system where people basically voted on the outcome based on what they considered credible news sources. Okay. I mean, that makes sense. You want to base your bets on accurate information. Right. But here's the problem. Credible is subjective, especially in a situation like that election where you have incredibly polarized viewpoints. And what one person says is a reliable source, another person is saying, no way, they're totally biased or fake news, whatever. So it becomes less about the actual information and more about who you trust to tell you about it. Exactly. And that's where Chaos Labs comes in. They think that their edge proof oracle, which uses AI, can analyze and verify information from all these different sources and get rid of that human bias and just tell you what's true. So instead of relying on humans to decide what's credible and what's not, we can have this AI system that's trained to just look at the facts. Exactly. And because we'd be able to cross-reference with tons of different sources, look for inconsistencies, it'd be a lot harder to game the system. That makes a lot of sense. And the thing is, this isn't just about elections. Right? Yeah. Imagine this tech being used in other places where trust is really important, like very important.
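As a toy illustration of the cross-referencing idea described above, the sketch below collects the outcome each source reports and only resolves the question when a clear supermajority of independent sources agree. This is not the actual Edge Proof logic, just an assumed simplification to show why pulling from many sources makes the result harder to game; all source and candidate names are made up.

```python
# Hypothetical multi-source resolution: require broad agreement before resolving.
from collections import Counter


def resolve_outcome(reports: dict[str, str], threshold: float = 0.75) -> str | None:
    """reports maps a source name to the outcome it claims.
    Return the consensus outcome if enough sources agree, else None (unresolved)."""
    if not reports:
        return None
    outcome, votes = Counter(reports.values()).most_common(1)[0]
    return outcome if votes / len(reports) >= threshold else None


# Five hypothetical news sources, four reporting the same winner.
reports = {
    "source_a": "candidate_x",
    "source_b": "candidate_x",
    "source_c": "candidate_x",
    "source_d": "candidate_y",
    "source_e": "candidate_x",
}
print(resolve_outcome(reports))  # candidate_x (4/5 agreement clears the 75% bar)
```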
It's a huge challenge, there's no doubt about it. And the team at Chaos Labs, they get that. It's not enough to just build a better algorithm or something. They're talking about completely changing how we think about trust, especially online. Okay, so how do we do that? Where do you even begin? Well, they talk about this idea of something called data provenance, which I know sounds very, like, technical. Yeah, a little bit. But it's actually a pretty simple idea. Okay, break it down for us. So imagine you're reading an article online, right? Data provenance is basically being able to see exactly where that article came from. Like, who wrote it? When was it published? Have there been any changes made to it since it was originally published? It's kind of like having that chain of custody for information so you know it hasn't been messed with. Oh, okay. So you can actually track the history of the information, see if anyone's been messing with it. Right. And that's where, again, blockchain is really useful because it can store that information in a way that's basically tamper-proof. So it's really hard for someone to change the data without it being super obvious. Okay, so I get how that would work with, like, a written article, but what about something like a social media post where it's harder to track those changes? Yeah, that's where it gets even more interesting because they're incorporating AI into this as well, specifically large language models. LLMs, we've talked about those. They can be super powerful but also, you know, make mistakes. Oh, yeah, totally. And that's something they're very aware of. So with their edge-proof Oracle, they're using these LLMs but for a very specific task, and that's to extract facts from information. So instead of asking an LLM to, like, write me an essay about the election, they're saying, okay, based on these 10 reputable sources, tell me who won. Okay, so it's like having a research assistant that can cut through all the noise and just give you the facts. Exactly, and the important thing is this is all transparent, so you can always go back and look at the sources, look at the code, and see how the Oracle came to its conclusion. There's no black box here. This is really making me think differently about the future of, well, everything online. Like, are we actually moving towards this future where we can trust what we see online again? I mean, it's a big question, and obviously, Chaos Labs is just one company trying to tackle this problem. But I think they're part of this larger movement that's trying to figure out, okay, how do we build a better Internet? How do we make it more trustworthy, more transparent? And ultimately, isn't it up to us, the users, to decide what we want the Internet to be? I think so. We need to demand better, not just from the platforms that we use, but also from the information that we consume, and even from ourselves. Like, we need to be more critical thinkers.
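The "chain of custody" idea above can be sketched in a few lines: each revision of a piece of content records who changed it, when, and the hash of the previous revision, so any quiet edit to history breaks the chain. In practice a blockchain would hold these records so they cannot be rewritten; here an ordinary Python list stands in for that ledger, and the field names are illustrative only, not Chaos Labs' schema.

```python
# Hypothetical data-provenance sketch: a hash-linked revision history.
import hashlib
import time


def record_revision(chain: list[dict], author: str, content: str) -> list[dict]:
    """Append a revision whose hash covers the previous revision's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    timestamp = int(time.time())
    body = f"{prev_hash}|{author}|{timestamp}|{content}"
    entry = {
        "author": author,
        "content": content,
        "timestamp": timestamp,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    }
    return chain + [entry]


def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; tampering with any past revision returns False."""
    prev_hash = "genesis"
    for entry in chain:
        body = f"{prev_hash}|{entry['author']}|{entry['timestamp']}|{entry['content']}"
        if entry["prev_hash"] != prev_hash or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True


history = record_revision([], "reporter_a", "Original article text.")
history = record_revision(history, "editor_b", "Original article text, with a correction.")
print(verify_chain(history))   # True

history[0]["content"] = "Quietly rewritten history."
print(verify_chain(history))   # False: the edit breaks the hash chain
```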
We need to be more careful about what we share and what we believe. Couldn't agree more. That's such a good point. It's easy to get caught up in the outrage, you know, or just share something without really thinking about it. It is, especially now. But we're all in this together, right? I think we are. Well, this has been an amazing conversation, I have to say. We've gone from, like, AI-generated deepfakes to the future of trust and everything in between. It's been a wild ride, that's for sure. So if there's one big takeaway from all of this, what would you say it is? I think it's that, yes, the Internet is kind of a mess right now. It's chaotic. That's a good word for it. But it's not hopeless. There are people out there working on solutions, trying to make things better. Yeah, like Chaos Labs with their whole Oracle thing, it's definitely an interesting approach. And who knows? Maybe it'll actually work. Maybe we can actually get to a point where we can trust what we see online again. That's a nice thought. It is. Well, on that note, that's it for this deep dive. Thank you all for listening, and we will catch you next time.
