The podcast discusses how technology is not neutral and bias-free, but rather shaped by racial and gendered logic. It emphasizes the importance of questioning who benefits and loses from technological innovations and ensuring equal accessibility. The discussion also explores the validity of granting AI rights and the ethical considerations of enhancing the human body with technology. It raises concerns about the environmental and social impact of technological advancements and suggests the need for companies to prioritize sustainability and involve diverse perspectives in product development.
Hi, everyone. Welcome to episode one of Technology in Our Time. In this podcast, we're going to be talking about how technology has historically been believed to be neutral and bias-free. But as I have learned throughout my course, Race and Technology, that is not the case: racial and gendered logics shape technological innovations. When looking at media, one must go beyond the promise of innovation to assess how inequalities are programmed into code and algorithms.
You must ask important questions such as: who benefits and who loses through these innovations? Is this technology equally accessible, and what effects do these technologies have on our rights? These questions will move us toward the goal of creating a more ethical approach to technological design with social justice in mind. With me, I have three of my friends who are going to be discussing various questions that I have. I'm going to have them introduce themselves one by one.
But first, I'm going to introduce myself. My name is Marcus. I'm a first year at UC Santa Cruz and I am a computer science major. I'm Kester. I'm a first year at UC Santa Cruz and I am a biochem major. My name is Owen. I am a first year and I am a computer science major. My name is Ethan. I'm a first year as well, and I am a biology major with a Spanish minor. So here is the first question that we're going to be talking about.
Artificial intelligence is rapidly advancing and becoming more accessible to the public on smaller scales, such as ChatGPT. As a result, some people have become attached to programs like ChatGPT, as they can simulate real, complex people, even if it is just through chat. That being said, some people are going to fight for ChatGPT's rights. Do you think, at this stage of AI, that this is valid? And if not, how complex does AI have to become for AI to have its own rights? Or are rights for AI even something that's plausible at all? All right, I'm going to start it off.
I think that at this point, ChatGPT definitely is not complex enough. It's not a complex enough form of AI to have its own rights. I believe for something to have its own rights, it needs to, you know, have its own free will: it can think its own thoughts, it has a free range of actions, and it can have real, complex interactions with other people. ChatGPT, you know, can only interact with people on the level of text.
I think that it needs to go a lot deeper than that, including actual real-life interactions, like physical interactions with other people. And I think a big part of what gives people rights is the fact that they can experience emotion on a complex scale, which is something that AI can't do. How do you guys feel about AI having rights? I think in terms of rights, it kind of comes from the idea of government and laws.
When the government creates laws and policies, whatever those laws and policies have an effect on should have a right or a say. But AI can't have a right or a say in something that the government does; all AI is is a human-made program and a series of inputs and outputs based on the words that are put into the system to answer questions. I don't think it's complex enough to be affected by any laws or policies that the government creates.
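To make that point concrete: a chat program really can be thought of as a plain function from input text to output text, built from statistics over whatever text it was trained on. Here is a minimal sketch in Python, with a toy bigram table standing in as an illustrative, hugely simplified stand-in for the far larger models behind something like ChatGPT; none of these names refer to a real API.

```python
# Toy illustration of "inputs and outputs based on the words put into
# the system": responses come purely from stored word statistics.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record which word tends to follow which (a toy bigram table)."""
    table = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def respond(prompt: str, table: dict, length: int = 8) -> str:
    """Generate output from statistics alone: no thoughts, no sentience."""
    words = prompt.split()
    word = words[-1] if words else ""
    out = []
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

table = train("the cat sat on the mat and the cat slept")
print(respond("tell me about the cat", table))
```

The point of the sketch is the one made above: however sophisticated the statistics get, the program remains a mapping from words in to words out.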
I personally think that it's silly to give AI rights at this point. When I think of rights, I think of living things like animals, plants, and humans, and ChatGPT and similar AIs are just not living things. It's not advanced enough and has no sentience, so it doesn't have its own thoughts. It's not human-like yet, and hopefully it doesn't become human-like. Yeah, I think that would be pretty dangerous. But yeah, I don't think we're at a point to give AI its own rights.
Yeah, the whole idea of emotions and, I guess, actual human characteristics, I don't think AI is at that point at all. And again, we hope it doesn't get there, because we don't know how we could really control that. That being said, I don't think AI is going to need rights. It's more that we just have to control what we're doing with it and make sure that it doesn't get to the point where it needs rights.
I want to introduce a different angle on this topic, going off of the technology ethics workshop we had for the class. We talked about the question of nature versus nurture. The question is: are people born as a clean slate, learning everything and becoming who they are based on their interactions with other people? Or are they born with something, like instincts and other stuff, kind of programmed into them, as all humans are?
That's a question that people have been discussing for a long time. And if we're going to relate this back to AI, right? If we go with the assumption that people are born as a clean slate and learn everything through interactions with other people, does that make AI kind of the same as us? Because what AI does is pull all this information through the Internet, right? All of the interactions we have with each other.
And it uses that as data to then create responses, right? Whether it's through ChatGPT now or, in the future, when it becomes more advanced. They simulate their emotions through all this data they have from the Internet. And we as humans are kind of limited in how many people we can interact with in our lives, so we learn who we are and become who we are mainly through our parents, our family, and our friends.
But AI has access to the whole Internet, right? Everyone's different opinions and personalities that they are constantly uploading. Could that possibly mean that AI is even more human than we are? If they have access to all this different data that millions of people are uploading every day to the Internet, and we only have our friends and our family to base our personality off of, does that make them even more human than us? How do you guys feel about that? I don't think it necessarily makes them more human.
You can describe the ocean to someone all day long. You can describe the feeling, the temperature; if you taste it, you can describe the saltiness. But you'll never truly know what the ocean's like until you step foot in it. Same thing with loss: you'll never truly know what it's like to lose someone until you've known unconditional love and things like that, which AI can never do. And I think sheer access to data on what something is like doesn't necessarily trump human experience.
Yeah, that's a great response, Ethan. I agree with what he just said. You know, AI is just like a big library that pulls in all this information and uses the algorithms it's coded with to generate these responses. And part of being human is learning through experience, actually trying to do things and experimenting. That's just a big difference between AI and humans: being able to experiment with things and then learn from that, rather than just reading books and gathering information that it's being told.
Yeah. Even though AI has access to all that information, it's not really, I guess you could say, understanding it. It's more just knowing it. And again, like Owen said, humans learn through experience, and technically just gathering information isn't experience, you know? Humans experience things firsthand, and that's a big part of learning.
So I don't think it really changes the question. Cool. Thank you guys for sharing your responses. Let's switch to a different question. The definition of a cyborg is a fictional or hypothetical person whose physical abilities are extended beyond normal human limitations by mechanical elements. That being said, some people consider all of us to be cyborgs already, as we use technology every day in our lives to become more efficient and get things done. Others disagree.
Should there be limits on what artificial changes can be made to the body? Should someone be able to enhance their body in any way they like? And do you consider yourself a cyborg already? For me, going off of the definition of a cyborg I just gave: yes, we do use technology every day, like our laptops, our phones, our computers, to get things done and be efficient.
I feel like the line between being a cyborg and being human is that something actually has to be implanted into your body to enhance your physical attributes. If we're using technology to write a paper or help analyze some data, I think that's a little bit different from someone using technology to, let's say, implant something in their eye to give them better eyesight.
So I think there's a bit of a difference between those. But in terms of limits on what people can do to their bodies in the future, I feel like there definitely needs to be some sort of regulation on what is allowed and what isn't. If you look at media, like some dystopian movies and shows where people are half robot, half human, things can get really ugly fast, especially if people use these enhancements in the wrong way, for violence. People could become superhuman.
Right, with this technology. And obviously, if that's not regulated, it can be very unsafe. I don't really have a clear answer on where the line is between what people should and shouldn't be able to do. But I also agree that having things put into the body can be good: there are mechanical pumps that help pump your heart to keep you alive. That technically means you're a cyborg, but it's just keeping you alive.
So I definitely think technology has a lot of benefits for us in terms of health care and helping people stay healthy. But in terms of having technology enhance our abilities beyond what is normal, I think that's a really gray area that we need to talk about. Yeah. Like what you said, I think of a cyborg the same way, as something that's implanted in you.
And the thing is, you're right, there's a lot of gray area, because technology is both beneficial and obviously can have its downsides. Especially with health, technology has already helped a lot, even without technically implanting anything into people. But yeah, like you said, by implanting some sort of technology that pumps someone's heart, you could help them.
I was thinking more of maybe even implanting something in someone's leg or arm, you know, just to help with movement. But yeah, it is really hard to find a restriction, or to know when to draw that line, because I feel like in our society, people are just going to try things until they go bad. And then, once a certain amount of bad things happen, that's when people start figuring out that fine line.
So I'm not sure. Owen, any thoughts? I think that Marcus and Kester covered everything that I wanted to talk about. When I was taking Writing 1, our class focused on disability justice. One of the authors of our class readings is Alice Wong. She is a disabled activist, and she is proud to call herself a cyborg because of all of the enhancements she has to help her function and live a healthier and overall more comfortable life.
So I think in those cases, especially when it helps people improve their quality of life, it's okay to have these features. But if there ever comes a time when people can use these technological features to be violent and, you know, just cause damage, then it's obviously not okay. And personally, I've had braces, and I think that's, you know, unnatural. So I guess I am a cyborg, to answer that first question. Me personally, I don't think there should be any limits placed on technology and its benefits to humans.
We're already seeing some real, actual forms of people being cyborgs. We have the release of the Apple Vision Pro, and, you know, that's almost as revolutionary as the very first iPhone. So I have a feeling we're going to be seeing a lot more technological advancements in that sense. I don't think it's necessarily an evil thing or a bad thing to make humans' lives more efficient. And I don't think there should ever really be a limit placed on that, because at the end of the day, technology is created by humans, and it can only do what humans want it to do.
So I don't ever see it becoming more intelligent than a human or making its own cognizant decisions. But yeah, I just don't think there's ever going to be a point in time where it's going to be limited. Maybe if it makes its way into warfare, but other than that, no. And we humans already know how to limit what we can and can't do in warfare. As an entire globe, we already decided that nuclear warfare is a no-no, because it would ultimately be the end of the world.
So I think we as humans are very capable of knowing our limits when it comes to what we can and cannot use and what is and isn't justified. Cool. Thank you. It's interesting that you say there shouldn't be a limit on us advancing technologically, that we should just keep becoming more advanced as a species. But in my class on race and technology, we talked about how the society we live in right now is a very capitalistic one.
The things that people value in society are growth, profit, and innovation. So people are always pushing, creating new things, trying to make as much money as possible without really thinking about what the repercussions could be. And if we're talking about technology, sure, it's cool that they're making new Apple glasses that can put an interface up in front of you. But do you ever think about where these parts are being made or where they're coming from? Who deals with all the e-waste and all the pollutants that are being created and poisoning less wealthy populations? We talked about how for some people in third-world countries, rates of cancer are super high just because all these companies like Apple and Google are dropping off all their e-waste on them.
And their job is basically to go through all that e-waste, burn it up, and try to get a little bit of copper out of the wires or the circuit boards. They're basically poisoning themselves, but that's their job; that's all they can do, because that's the life they live. So, given the way society is set up right now, as a very capitalistic society, do you think there needs to be some sort of committee to slow down and ask these questions: Is this okay? Can this company create this, and should they? And while it might seem really cool and innovative at the time, will there be drawbacks to it being created? Who's going to ask these questions? Like I said, we value growth and profit, but we don't really think about the repercussions that can come with these things.
So in what way can big high-tech companies like Apple and Google slow down and think about all the different effects they have on different people? A lot of poor communities are communities of color, mainly non-white communities. So if big high-tech companies are poisoning these lower-income communities, it just isn't fair to those people, and it's not ethical. Do you guys have any thoughts on how a company like Google or Apple can become more equitable in that sense? Yeah, so I've always thought about how there's never been a trillionaire.
The first trillionaire will probably be someone who figures out how to safely break down these products created by big tech, like Tesla batteries. They're made out of lithium, yet all these Tesla batteries are just piling up as waste. So the first trillionaire, I think, will be the person who can figure out how to break down these lithium parts and all this e-waste being created by these products. But in terms of that, I think we're making pretty good strides.
Plastic is very recyclable, and most of these products being put out by these tech companies are plastic at the end of the day. I think if the tech companies can just figure out how to reuse parts without sending most of them to waste, then it would make society feel better about buying products like that, I feel. I'm going to be a bit more pessimistic here.
I'll say that the reason they haven't found any sustainable way to mine and manufacture these products is because there is none, or at least nothing that would make them a profit. It would probably cost more to mine these materials in a more sustainable way than to pay third-world countries pennies to mine these resources. If someone does come up with a healthier way to manufacture these products, then cool. But personally, I don't think there is one.
And if there is, then they're keeping it a secret because it won't make them as much money. I don't have much to add; Ethan and Owen brought good perspectives to it. I lean more towards what Owen's saying, though. Actually, no, I'd say it's equal, because companies have specific teams that specialize in looking into sustainability and everything. So I'm sure there are companies that are truly trying their best. And like Owen said, there are companies that just want to make money.
So, yeah. Cool. I think it's imperative that these big companies, or any company that's planning on creating something, have these teams that focus on sustainability. And a big part of that is that we have the software developers coding up what the boss says, like, oh, you're going to make this feature. But we also need teams, maybe people from the humanities, who will ask these questions and, in a way, slow down this constant wheel of profit and innovation.
Slow it down and allow people to ask questions like: is this going to create a lot of waste that will impact these low-income communities and low-income countries? Or just have more people involved. I think the more people involved in the creation of these products, the more perspectives there are, and the more likely people are to create a product that is as equitable as possible. But that being said, no one is going to be completely free of bias.
So these products are being made by people who are biased, and inadvertently, the technology they create is going to be biased. There's just no way to get around that. That being said, with as many people working on these projects as possible, hopefully the technology created in the future will be less biased than the technology we have now. The last question I'm going to be talking about is how technology has personally affected us, more specifically in a negative way.
People suffer from social media addictions, phone addictions, and a whole plethora of other things. The algorithms behind these programs are intentionally built to be addicting. Should this be regulated? People who do not have access to help or support are obviously at a higher risk of experiencing these things; how can this be prevented? And should companies be able to exploit people's addictions to make a profit? Owen's going to start it off. I do agree that these apps can be very addicting.
When I had them downloaded, I did have several moments where I'd spend multiple hours scrolling endlessly. I can definitely see how these apps have changed over time to become more addicting. Probably a decade ago, especially on Instagram, they didn't have these public feeds where you can look at what other people are posting. I think the content is also a big part of it. People have started to realize which content garners more views and clicks. People outside of the companies have also exploited this.
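To picture what "intentionally built to be addicting" can mean mechanically, here is a hedged sketch of an engagement-first feed ranker. The fields, weights, and names are illustrative assumptions, not any platform's actual algorithm.

```python
# Illustrative sketch: rank a feed purely by predicted engagement.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    watch_seconds: float  # how long users tend to linger on it
    shares: int
    likes: int

def engagement_score(post: Post) -> float:
    # Time spent is weighted heaviest: the longer you scroll,
    # the more ads you see. Benefit to the viewer never enters in.
    return 3.0 * post.watch_seconds + 2.0 * post.shares + 1.0 * post.likes

def build_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by engagement score, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = build_feed([
    Post("calm tutorial", watch_seconds=20, shares=1, likes=10),
    Post("outrage clip", watch_seconds=90, shares=40, likes=200),
])
print([p.title for p in feed])  # the attention-grabbing clip ranks first
```

Once a loop like this is optimizing for watch time, the content Owen describes, whatever garners the most views and clicks, floats to the top by construction.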
In terms of how we fix this, there are other things that are not beneficial to society, like alcohol and tobacco, that are still legal. Companies know they're addicting and that they harm people's lives, yet they're still allowed. I don't really know how to address this, but I think it should be addressed in the future. I think that's it; I don't know what else to add. I think it comes back to the fact that society itself is very capitalistic and people just want profit.
Companies are going to do what they can to create algorithms that are going to make them the most money and make people use their programs or their apps the most. The only real way to break out of this cycle of companies making stuff purposely to be addicting is for us as a society to not value profit so much. Obviously, that's a very hard thing to do. If companies only care about profit, they're not going to really care as much about what it does to people as long as they're getting a big paycheck at the end of the day.
I don't know about that, Marcus. It's not just Instagram that profits from its own platform. A lot of small companies get their business from posting on social media. I don't know if you guys know the guy on TikTok who does food reviews. He's saving all these small businesses just by reviewing their food, and he's not getting paid for it at all. So yes, while Instagram and TikTok benefit from their own algorithms, there are hundreds of small companies that also benefit from the TikTok and Instagram algorithms as well.
We just had the TikTok hearings over whether it should be banned. A big reason it probably didn't get banned, and probably never will, is the fact that so many people use it for a living. To stop this issue, it's definitely up to the user, for one thing. And at the end of the day, like Ethan said, there are benefits: social media is used to spread awareness about many things as well.
The whole premise of social media companies is obviously to curate their content for the user; again, they want to make money. I truly think it's up to the user. Personally, I still have TikTok and Instagram, and there are times when I'm just scrolling for a very long time, like Owen said. It is really addicting. TikTok, for example, has started incorporating little pauses throughout your scrolling suggesting that you get off the screen. I am glad that they're starting to take some initiative.
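For a sense of how a feature like those pauses might work, here is a hypothetical sketch: after a set stretch of uninterrupted scrolling, the feed injects a break reminder. The threshold, class, and message are illustrative assumptions, not TikTok's actual implementation.

```python
# Hypothetical "take a break" nudge: interrupt the feed after a long
# stretch of continuous scrolling.
import time

SCROLL_BREAK_SECONDS = 45 * 60  # assumed threshold: 45 minutes

class FeedSession:
    def __init__(self) -> None:
        self.window_start = time.monotonic()

    def next_item(self, item: str) -> str:
        """Return the next post, or a break reminder if scrolling too long."""
        elapsed = time.monotonic() - self.window_start
        if elapsed >= SCROLL_BREAK_SECONDS:
            self.window_start = time.monotonic()  # restart the clock
            return "You've been scrolling for a while. Time for a break?"
        return item

session = FeedSession()
print(session.next_item("first post"))  # within the window: the post itself
```

Note that the reminder comes from the same pipeline that serves the posts, so it is a design choice by the platform rather than something the user configures.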
At the end of the day, I think it comes down to individual liberty, like Ethan said, to stop the overall growth of these addictive social media issues. I have one more thing to add. I think that social media is now a whole new sector of our economy, like alcohol and tobacco. Banning it, I think, would have a similar outcome to Prohibition, which was hugely unpopular even though alcohol is obviously bad for people.
There really are no health benefits to it. It's just a popular thing, and it makes a lot of money for people; it's a whole business. Just banning it wouldn't bode well for the economy or for society as a whole, I think. Our great-grandparents lived through Prohibition. They had speakeasies and all these other ways to feed their craving for alcohol. Whereas nowadays, we have VPNs.
We have other ways of still accessing social media even if our country were to ban it. So what restrictions can you really place? You can't. People are going to find a way to get what they want, whether it's alcohol during Prohibition, or guns, which criminals will still find ways to get if we ban them, or social media, where people will still find a way to download VPNs. It's just one of those things.
If you truly feel like it's a problem, you need to take accountability yourself and not have those apps, and if you have children, not allow them to have them, or place restrictions on your children as well. At the end of the day, we do live in a capitalistic country, like you said. There's just no plausible way to effectively restrict social media across an entire country, let alone a country with over 300 million people in it.
I 100% agree with all you guys. I think when it comes down to it, it's going to be up to us as individuals to stand by what we want to see and the values that we hold. If we don't support social media and the addictions that can arise from it, then we shouldn't be using it. It's going to start from us as individuals banding together and agreeing not to use social media, or at least not to use it in that way.
As more people begin to support each other in this campaign of protesting social media, or certain aspects of it, it's going to draw these companies' attention. We as individuals living in a capitalistic society need to value things other than money, meaning money shouldn't be our only end goal in life. We have to have other goals. We have to do what makes us happy, do something that we're passionate about.
Money is a big part of living; you need money to live. But when it comes down to it, money can't buy you happiness. We as people need to make our decisions based on what is going to make us happy, not what's going to give us the biggest paycheck at the end of the day. The more people that do this, the more we as a society are hopefully going to slowly begin to change what we value the most.
Right now it's profit, but that doesn't mean we can't value other things more in the future. Do you guys have anything else to add? Nope. Alright, cool. This is the end of episode one. Maybe there will be another episode, maybe there won't, but thank you for listening.