The podcast discusses problems with AI in the medical field and in computer science. Privacy is a major concern: medical information is now stored electronically, making it vulnerable to hacking, and computer science struggles with hacking and privacy by the very nature of coding. ChatGPT is a potential source of privacy issues, since information entered into it could be stored and disclosed to others. Possible solutions include using only essential data and giving users control over their information. Another issue is AI's lack of abstract thought compared to doctors; computer scientists hope that advances in technology, like quantum computers, will improve AI's ability to think abstractly. To protect their privacy, users can adjust settings and be cautious about the information they disclose. Current regulations for privacy protection are lacking.

Hello, hello, this is Ulysses Preciado and this is Biotech. This is the podcast where we talk about and disclose information that has to do with technology and its incorporation into the medical field, or the sciences overall. Today we're going to be talking about some problems associated with AI in the medical field, and possible approaches from the computer science and coding fields that could be used to fix these problems.

All right, so first let's talk about the history and what these problems actually are. The first one is privacy. In the medical field, for the most part, everything in the past was done on paper, and only physical copies existed in the hospital. To get that information, you had to go to the hospital and physically retrieve it, whereas now it's all integrated into technology.
So a big problem hospitals now face is how to keep people's information safe, not so much from the public as from people who can hack into these databases and steal that information. Now, when it comes to computer science, what is the history of the problem there? Computer science and coding have always struggled with hacking and privacy, because code is based on algorithms, and if you figure out a way to understand an algorithm, almost like a language, it unlocks the ability to interpret and tamper with that code, which is how people essentially steal information.

Now, if you're putting information into ChatGPT and it does get used for future training of the model, it's possible that that information could show up in other people's chat threads in the future. This is one of the major fears with ChatGPT: that it will store your information and re-disclose or reintroduce it to other people, or that the information could somehow be leaked or revealed.

So what can we actually do in computer science to solve these problems? Well, in the article "Ways to Preserve Privacy in Artificial Intelligence," a computer science expert stated that protecting this sort of information is really difficult, because AI runs off a database, and all of that information has to go somewhere for the system to function properly, especially as of right now. So when it comes to things like ChatGPT or OpenAI, everything goes to a central database, and the company has full access to that information and can do with it what they will.
So at this point, if you want to use their technology, you unfortunately have to deal with this. Now, in the article, the expert did lay out possible solutions and probable courses of action for the future. One of these is to use good data hygiene: having companies use only the information essential to answering the question at hand. In the medical field, this would look like the AI going through a patient's chart, stripping out all the personal information it doesn't need, like their name, their age, their height, keeping that out of the database, and working only from the symptoms or problems right in front of it. That would help protect people's privacy. Another big factor he discussed was giving users control. This basically means letting users decide how much information they disclose to the AI, and giving them more levers over their privacy and over what the AI is able to do.

Now, the second problem with AI in medicine is abstract thought. Doctors go through years and years of rigorous study, schooling, and then shadowing to learn how to sift through information and figure out what's necessary and what's not based on what the patient tells them. AI doesn't have a doctor's profound ability to prioritize the symptoms a person discloses and effectively separate what's essential for a prognosis from what isn't.
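The "good data hygiene" idea described earlier can be sketched in code. This is a minimal illustration, not anything from the episode: the record structure and field names are hypothetical, and a real de-identification pipeline would need far more care (HIPAA's Safe Harbor rule, for instance, lists eighteen categories of identifiers to remove).

```python
# Minimal sketch of data hygiene: drop personal identifiers from a
# patient record before handing it to an AI service, keeping only the
# clinically relevant fields. Field names are hypothetical.

PERSONAL_FIELDS = {"name", "age", "height", "address", "phone", "ssn"}

def scrub_record(record: dict) -> dict:
    """Return a copy of the record with personal identifiers removed."""
    return {k: v for k, v in record.items() if k not in PERSONAL_FIELDS}

patient = {
    "name": "Jane Doe",
    "age": 47,
    "height": "165 cm",
    "symptoms": ["persistent cough", "fever"],
    "medications": ["lisinopril"],
}

print(scrub_record(patient))
# only the symptoms and medications survive the scrub
```

The design choice here is an allow-nothing-personal deny list for brevity; a safer real-world variant inverts this into an allow list, so fields nobody thought to blacklist are dropped by default.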
So the computer science and coding fields have long been trying to develop AI into a free-thinking, abstract-thought sort of machine, and they believe things such as quantum computers or other massive computing technologies will be able to achieve this. But for now, AI is basically just reciting information it's been fed rather than actually thinking about and processing that information the way we assume it does. Computer scientists believe that as these technologies improve over time, AI will actually get better and be able to produce these abstract thoughts, which would largely eliminate the fear of misinterpretation and close the gap between AI and an actual physical doctor.

Finally, what actions can you take right now to protect your privacy and security? Well, there are settings in ChatGPT, under privacy and security, that let you keep your previous chats and search history from being reused, which is one way to protect the privacy of that data. Another thing you can do is really regulate yourself and what information you disclose to ChatGPT or any of these OpenAI-style programs, because as of right now there aren't many regulations when it comes to privacy and keeping your information safe.
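One concrete way to "regulate yourself" is to redact obvious identifiers from text before pasting it into ChatGPT or any other AI tool. The sketch below is a hypothetical illustration, not a feature of ChatGPT, and its two regex patterns are deliberately simple and nowhere near exhaustive.

```python
# Redact obvious identifiers (emails, US-style phone numbers) from text
# before sharing it with an AI service. Illustrative patterns only; a
# real redactor would cover many more identifier types.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

msg = "Contact me at jane.doe@example.com or 555-867-5309 about my results."
print(redact(msg))
# → Contact me at [EMAIL REDACTED] or [PHONE REDACTED] about my results.
```

Running the redaction locally, before anything leaves your machine, is the point: the AI service never sees the identifiers at all, so there is nothing for its database to store or leak.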