Chaos Labs New Hires_ October 1st, 2024 (all 5)


Jacob Karsch

Transcription

AI is becoming increasingly prevalent, making it difficult to distinguish what is real. Chaos Labs is working on building oracles, which are digital authenticity filters, to verify data and ensure its legitimacy. They are incorporating AI to spot deepfakes and manipulated information. One area they are exploring is prediction markets, using AI to determine outcomes and create a more trustworthy system. However, the lack of clear guidelines for oracles can lead to controversy, as seen in the Venezuelan election market example. While there are still many questions to be answered, Chaos Labs is assembling a team of experts to tackle these challenges and create scalable solutions.

It seems like AI is absolutely everywhere, right? You can't even check the news anymore without seeing another wild headline about it. It's exciting and all, but honestly, have you noticed how it's getting harder to figure out what's actually real these days?
Yeah, it's like we've suddenly found ourselves living in the era of fake everything. And the really scary part is, the tools to create these fakes, they're evolving way faster than we can keep up with. Reading that Chaos Labs article really made that hit home for me. I remember that tweet from their CEO, Omer Goldberg, where he broke down just how unbelievably cheap it would be for someone to basically manufacture a viral moment out of thin air.
Right, like a million fake replies on Twitter using GPT-4.
Yeah, all for like 250 bucks. It's insane. It's not just the sheer scale of it that's concerning though. It's the potential fallout from all of this.
Yeah.
I mean, think about it. We're talking about AI-generated articles that could be mistaken for like legitimate journalism, or even scarier, deepfake videos that could literally rewrite history as we know it. And then there's the whole thing with AI creating complete online identities designed to manipulate people's opinions. It's a minefield out there.
It's like something straight out of Black Mirror, but it's our reality now. So that brings us to Chaos Labs and their work on this concept of oracles, which honestly sounds kind of out there at first. They're basically trying to build trust in a world where it feels like everything's constantly being called into question. It's almost like they're trying to create these digital authenticity filters.
You know, that's a pretty spot-on analogy. Like, we're all used to spam filters, right? Keeping our inboxes clean and all that. But imagine if you applied that same principle to, well, everything you encounter online. Chaos Labs is building these oracles to verify data before it even makes it to our apps. So it's not just about delivering information anymore. It's about actually vouching for the legitimacy of it.
Right, so it's more than, like, making sure the price of Bitcoin is accurate on some trading platform. Although I know Chaos Labs does do that too with their Edge oracle, which, by the way, is apparently protecting billions in assets, which is just, wow.
Oh yeah, for sure. But that's really just the tip of the iceberg when it comes to what they're trying to do. Think about online reviews, for example. How can you be sure those are from actual customers? What if they're just AI bots paid to manipulate product ratings? Or what about all those interactions you have on social media? Are those even real people you're connecting with? Or just cleverly disguised bots trying to push certain narratives?
This is kind of terrifying when you really start to think about it. Like, how can we even be sure that the information we're taking in is actually real anymore?
Chaos Labs seems to think that oracles are the key to solving this whole thing. And what makes their approach so fascinating is how they're pushing the boundaries of what these oracles can do. They're incorporating AI, but in a very targeted and controlled way to really verify information and guarantee its authenticity.
Okay, so let's unpack that a bit. How can AI, which is often seen as part of the problem here, be used to actually build trust?
Well, I think it all comes down to recognizing that AI is ultimately just a tool. And like any tool, it can be used for good or for bad, right? What Chaos Labs is doing is exploring how to really leverage AI's strengths, like its ability to crunch massive amounts of data and spot patterns that we humans might miss, to actually fight back against its potential for, you know, deception and creating chaos.
So instead of building AI that creates deepfakes, they're building AI that can spot them. That's actually, like, kind of genius when you think about it.
Exactly. And one of the most interesting areas where they're applying this approach is in the world of, get this, prediction markets.
Okay, prediction markets. Now, for those of us who aren't exactly Wall Street wizards, break that down for me a little. What are we talking about here?
Sure, so imagine you're trying to, like, forecast the outcome of some future event, right? Like an election or maybe the success of a new product or even just the outcome of a big game. Well, a prediction market basically lets people buy and sell shares that represent those different outcomes.
Okay, so, like, if I think a certain team's gonna win the Super Bowl, I'd buy shares in that outcome. And the price of those shares would then reflect, like, how confident the market as a whole is in that outcome.
Exactly, it's like a real-time gauge of collective wisdom, you could say, where the price of those shares directly reflects the perceived likelihood of that outcome actually happening. And that's where oracles come in.
I'm sensing a "but" coming here.
You're right to be cautious. So, in a traditional prediction market, you need some way to actually determine what the real outcome of that event was, right? And then make sure the winners actually get paid out. And this is traditionally where oracles have been vulnerable to, let's say, manipulation.
Because if the entity that's controlling the oracle decides to, you know, fudge the results a bit.
Exactly, they could essentially manipulate the entire market. But what Chaos Labs is doing is piloting a new approach that uses AI, specifically these things called large language models, or LLMs for short, to try and create a more trustworthy system.
LLMs, like that Google Gemini model we talked about earlier, the one that was supposedly rewriting American history. I'm not sure that inspires a ton of confidence, if I'm being honest.
I get it, it's a valid concern. And it really highlights why Chaos Labs is being so careful here. They're not just, like, handing over the keys to the castle to some all-powerful AI and hoping for the best. They're using these LLMs in a very specific and controlled way.
So, instead of asking an LLM to just tell us who won the election, they're being much more, like, precise with their instructions.
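
To make the share-price idea from the exchange above a bit more concrete: on Polymarket-style markets a winning share typically pays out $1, so the quoted price can be read as an implied probability. The sketch below is purely illustrative, with made-up numbers, not a description of any particular market.

```python
# Toy illustration of prediction-market pricing, assuming the common
# convention that a winning share pays out $1 and a losing share pays $0.

def implied_probability(share_price: float) -> float:
    """A $0.65 YES share implies the market sees the outcome as ~65% likely."""
    return share_price  # price in dollars equals probability under the $1-payout convention

def expected_profit(share_price: float, your_probability: float) -> float:
    """Expected profit per share if your own probability estimate is correct."""
    return your_probability * 1.0 - share_price

# Example: YES on "Team X wins the Super Bowl" trades at $0.65,
# but you personally believe the true chance is 75%.
price = 0.65
print(f"Market-implied probability: {implied_probability(price):.0%}")
print(f"Expected profit per $1 share: ${expected_profit(price, 0.75):.2f}")
```
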
Exactly. So, for example, they might feed the LLM articles from a bunch of different reputable news sources and ask it to specifically extract the declared winner from each source. That way, the LLM isn't trying to interpret the news itself or make any subjective judgments. It's just pulling out that one specific piece of data.
So, it's like you're giving a really specific recipe to, like, a super fast cook. You're controlling the ingredients and the steps in the process. So, the final outcome is gonna be much more predictable.
That's a fantastic analogy. And it really highlights what's so cool about Chaos Labs' approach to using LLMs for these oracles. They're mitigating the risk of bias by focusing the AI on very specific, tightly-defined tasks.
Okay, but didn't Chaos Labs have some real-world example of how things could go wrong with these oracles, even with good intentions? Something about the Venezuelan election, if I remember correctly.
Oh, you're thinking of the 2024 Venezuelan election market on Polymarket. That one definitely became a cautionary tale about how even when everyone's trying to do the right thing, things can still get messy in a hurry.
Yeah, tell me more about that. What happened exactly?
Well, the election itself was incredibly contentious. There were these conflicting narratives about who'd actually won, with the incumbent party obviously claiming victory based on, you know, their version of the official results. But at the same time, you had the opposition disputing those results, alleging all sorts of fraud, and claiming victory for their candidate. It was a mess.
So, a real political showdown then. What did that mean for the prediction market on Polymarket?
Well, the market was relying on an oracle to ultimately decide the outcome and then settle all the bets, right? But the problem was, with so much conflicting information flying around and such different interpretations of the actual event, which narrative was the oracle supposed to prioritize?
Ah, I see the problem. Who do you trust when the truth itself is so up for debate?
Precisely. And this is where things like the resolution criteria, meaning how the oracle actually makes its decision, become super important. In the case of this Venezuelan election market, the lack of really clear guidelines on that point led to a ton of controversy, with some people accusing the oracle of being biased in how it ultimately made the call.
Right, so this example really drives home the point that these oracles are only as good as the rules they're following, yeah.
Absolutely. And it underscores why what Chaos Labs is trying to do, creating a more transparent and trustworthy system for getting to the bottom of these real-world events, is so crucial.
It really makes you wonder though, if AI is truly the solution here, or if it's just adding another layer of complexity to a system that's already pretty complex to begin with. Like, how can we be sure that Chaos Labs' approach is actually scalable? Could it really be applied to, like, the entire internet, for example?
Those are excellent questions. To be honest, no one has all the answers figured out yet, but what's so exciting about Chaos Labs is that they're not just building technology, they're putting together a team of absolute rock stars with the knowledge and experience to tackle these very questions head on.
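
As a minimal sketch of the "specific recipe" idea described above: ask the model for exactly one field per source, then require agreement across independent sources before resolving anything. This is not Chaos Labs' actual pipeline; the prompt wording, the `gpt-4o-mini` model choice, the 80% agreement threshold, and the helper names are all illustrative assumptions, and the OpenAI client is just one example of an LLM backend.

```python
# Illustrative sketch only -- not Chaos Labs' implementation.
# Assumes the `openai` Python package (>=1.0) and an OPENAI_API_KEY in the environment.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def extract_declared_winner(article_text: str) -> str:
    """Ask the model for one narrowly scoped fact: the winner this article declares.

    The prompt deliberately forbids interpretation -- the model only extracts
    a single field, mirroring the 'specific recipe' framing in the discussion.
    """
    prompt = (
        "From the article below, return ONLY the name of the candidate the "
        "article declares as the election winner. If the article does not "
        "declare a winner, return exactly UNRESOLVED.\n\n" + article_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # deterministic extraction, no creativity
    )
    return resp.choices[0].message.content.strip()

def resolve_market(articles: list[str], min_agreement: float = 0.8) -> str:
    """Resolve only when a supermajority of independent sources agree."""
    answers = [extract_declared_winner(a) for a in articles]
    winner, count = Counter(answers).most_common(1)[0]
    if winner != "UNRESOLVED" and count / len(answers) >= min_agreement:
        return winner
    return "UNRESOLVED"  # conflicting coverage -> do not settle the market
```
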
And this is where I get to geek out a little bit, right? Because you're talking about their, like, all-star lineup of new hires. This team they're putting together, it's like something out of a movie, like a heist crew or something. I mean, you've got this ex-fighter pilot, a quant researcher who's diving into sentiment analysis. Who else is next? Do they have, like, a master hacker with a secret lair?
Well, their, quote unquote, master hacker is actually a top-ranked Dune Analytics wizard. I think he goes by Whale Hunter online. But yeah, you're right, this team is seriously impressive. Like, take Max Budden, for example. He's actually at Columbia right now, getting his MBA, focusing on finance. But his resume, let me tell you, reads like a Jason Bourne movie.
Okay, so we're not talking about your typical, like, finance bro here.
Not even close. This guy spent eight years as a pilot in the Israeli Air Force and then on top of that, he worked as a senior project manager for the Israeli Ministry of Defense.
Wow, that's a serious background. I can definitely see why Chaos Labs wanted this guy on their team. Talk about understanding risk. Okay, who else is on this, like, dream team of theirs?
Well, they also brought on Asa Turkoglu. He's a quant researcher based in Istanbul. And he's got this really solid background in traditional finance, having worked at Akbank and Borsa Istanbul. But what's really interesting about him is his work with what's called alternative data.
Alternative data. Okay, now that has a nice ring to it. What exactly are we talking about here?
So he was working on building a Turkish-language lexicon specifically for evaluating, like, local content on Twitter. Basically finding ways to analyze sentiment in different languages, which is absolutely huge in a global market like crypto.
Especially when you're trying to tell if that sudden wave of positive sentiment is, like, genuine excitement or just a bunch of bots trying to pump and dump a coin.
Exactly.
Okay, makes sense. Who else have we got?
So then there's Dave Fatcary. He's a Swiss data scientist who honestly seems like he was, like, tailor-made for Chaos Labs. I mean, this guy's expertise is this perfect storm of, like, machine learning, blockchain tech, and DeFi.
So he definitely speaks the language then. But is he more of a hands-on builder or more of a, like, theory guy?
Oh, he is a builder, for sure. He actually founded his own on-chain asset management company called Coreo Finance. So he's putting all that knowledge into practice.
Got it. Okay, I'm starting to see a theme here. And it's not just people with, like, these crazy, impressive resumes. They're all doers. Like, these are builders, people who actually make things happen.
That's a great way to put it. And then, of course, there's our mysterious whale hunter, the Dune Analytics wizard. This is someone who understands the DeFi community from the inside out.
So they're like the eyes and ears of Chaos Labs, making sure they're building the right tools for the people who need them most.
Exactly. And then last, but definitely not least, we've got Robert A. Noma. He's their Web3 expert. And he's been super deep in the blockchain space for a while now. Like, we're talking holding key roles at places like Horizon Labs and Livepeer, even serving as an ambassador for the Global Blockchain Business Council.
Okay, talk about a wealth of experience, not to mention connections. This guy has his finger on the pulse of the entire Web3 ecosystem.
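
For a rough feel of the lexicon-based sentiment analysis mentioned in the Asa Turkoglu discussion above, here is a toy sketch. The word lists, scoring rule, and English example tweets are invented for illustration; a real lexicon for local-language Twitter content would be far larger and would handle morphology, slang, and negation.

```python
# Toy lexicon-based sentiment scorer -- illustrative only.
# The tiny word lists below are invented; a production lexicon for a language
# like Turkish would contain thousands of scored terms.

POSITIVE = {"bullish", "great", "moon", "excited"}
NEGATIVE = {"bearish", "scam", "dump", "rug"}

def sentiment_score(tweet: str) -> float:
    """Return a score in [-1, 1]: +1 if all matched terms are positive, -1 if all negative."""
    words = [w.strip(".,!?") for w in tweet.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

tweets = ["Feeling bullish, this project is great", "Obvious scam, they will dump"]
print([round(sentiment_score(t), 2) for t in tweets])  # [1.0, -1.0]
```
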
Totally. What we're looking at here is a team that's not just capable of building cool technology. They're equipped to actually change how we approach this whole idea of trust in our digital world.
It's actually pretty inspiring when you think about it. So as we kind of wrap up our deep dive into Chaos Labs here, what's the one big takeaway you're hoping our listeners walk away with?
Honestly, for me, it's about recognizing that we're at this crucial turning point. AI has the power to either completely spiral us into chaos and distrust, or it can be this incredible force for transparency and accountability.
And it sounds like Chaos Labs is placing their bets on the latter. Like, they're not just building these oracles. They're building this future where we can actually trust the information we're consuming, where authenticity actually means something.
Exactly, and that's a future that's definitely worth fighting for.
Couldn't agree more. Thanks so much for diving into all this with me. It's definitely given me a lot to think about. And for everyone listening, stay curious, keep questioning everything, and remember, sometimes a little bit of chaos can lead to a much more trustworthy future.
