Think Like a Machine: The Power of AI in Today's World | Gurugrah

Machines That Think

The proliferation of digital computers inspired scientists to consider their capabilities and limits. In 1936, Church and Turing independently demonstrated that digital computers cannot solve every problem. Other questions remained open, one of which still occupies society today: can machines think?


In 1950, Turing published the paper "Computing Machinery and Intelligence". In it he proposed a practical test for machine intelligence that is still widely known today: the Turing test. But there is more in the paper. It surveys the most common arguments against AI, many of which are still discussed today, and some of which grow weaker with each new computing milestone. In addition, Turing's proposal to teach machines the same way we teach children makes this paper one of the founding documents of artificial intelligence.


"Computing Machinery and Intelligence" is a pioneering article, remarkably ahead of its time and, most importantly, still worth reading. Since it is one of the earliest works in computer science and artificial intelligence, it can be tackled without much background knowledge. Yet it touches on so many aspects that even the most advanced AI reader will be amazed. This blog post is just a preview of the article; I encourage everyone to pick it up and read it for themselves.


Until the 1950s, the term "computer" more commonly described people, mostly women: it was a job title for someone who carried out calculations by hand. Human computing reached its peak during the world wars, and the rise of digital computers set off its decline. Digital machines soon overtook their human counterparts, and by 1954 most human computers had already been replaced. Turing's article was published at this very moment of transition. It was written only two years after the Manchester Mark I, one of the first electronic computers, began operating, when humans still did most of the calculations. The paper therefore contains several sections that explain the notion of a digital computer to the reader, describing it as a machine that follows the same kind of rules a human computer would.


Can machines think?

To answer this question, the terms "thinking" and "machine" would first have to be defined. Instead of doing that, Turing suggests replacing the question entirely: rather than asking whether machines can think, we should ask whether a machine can win the imitation game.


The imitation game

The imitation game requires three players: a man (A), a woman (B), and an interrogator (C). A and B sit in another room, separated from C. C's goal in the game is to find out which of the two players (labelled X and Y from C's point of view) is the woman. C may ask A and B any questions, which they must answer. A tries to deceive the interrogator by pretending to be the woman, while B tries to help C identify her correctly. All communication is in writing, so that voices do not reveal who X and Y are. Turing then suggests replacing A with a computer: can we imagine a digital machine that fools a human interrogator as often as the human player does? In Turing's eyes, that is the question we should ask.


What does it take to win the imitation game?

From today's perspective, we can see that several AI sub-disciplines would be required for a computer to play the imitation game. First, natural language processing would be needed to communicate with the interrogator. Some form of knowledge representation and automated reasoning would be required to reflect on the questions asked and provide satisfactory answers. Finally, machine learning would be necessary to allow the machine to adapt to new circumstances. To take a step back: Turing wrote his paper when even the very notion of a digital computer was unfamiliar to most people. I assume today's reader is familiar with these machines, but Turing's foray into computing still has more to offer.


On Computing

You may have heard of Turing machines. In an earlier paper, Turing showed that a universal machine can mimic the behavior of any other such machine, given enough resources. In this sense, all computers are the same: the chip in your microwave could, with the right code and enough resources, perform the same calculations as your fancy MacBook Pro. Following this logic, if any one Turing-complete machine (and digital computers are Turing-complete) can succeed at the imitation game, we have answered the question for all digital computers. And Turing believed such a machine was possible.
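
To make the universality idea tangible, here is a minimal sketch of a rule-following machine in Python. It is a toy of my own, not Turing's original construction: the simulator blindly executes whatever rule table it is given, and this particular table happens to increment a binary number.

```python
# A toy Turing machine simulator. The machine itself knows nothing about
# arithmetic; all behavior comes from the rule table it is handed.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", accept="done", max_steps=1000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = defaultdict(lambda: "_", enumerate(tape))  # blank cells read as "_"
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        state, tape[head], move = rules[(state, tape[head])]
        head += 1 if move == "R" else -1
    return "".join(sym for _, sym in sorted(tape.items())).strip("_")

# Rule table for binary increment: scan right to the end, then carry leftward.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("done", "1", "L"),
    ("carry", "_"): ("done", "1", "L"),
}

print(run_turing_machine(rules, "1011"))  # 1011 + 1 = 1100
```

Swap in a different rule table and the very same simulator computes something else entirely; that interchangeability is the heart of the universality argument.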


Contrary views

One contrary view is the argument from disabilities: the claim that a machine will never be able to do X, for some human ability X. From today's perspective, however, machines have shown that they can do quite a few of those X's, and it seems ever less likely that they cannot do the rest. Of course, there is also the mathematical objection: certain problems are provably unsolvable by a machine. Turing counters that humans are not perfect either and often make mistakes; even where a computer cannot solve a problem, it could be programmed to give a plausible answer and simply appear to have made a mistake when it is wrong. Then there is Lady Lovelace's objection. Lady Lovelace is generally credited with being the first programmer; she wrote algorithms for the "Analytical Engine", a mechanical computer. In her notes, written in 1842, she states that "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform." But what if we order the machine to learn and create? To address this argument, Turing introduces machine learning, to which he devotes an entire section of his paper.


Artificial Intelligence(A.I.)

Artificial intelligence is a term most people in the modern world are familiar with. Better known as AI, it is a hotly debated topic: do we even want AI in our daily lives? We have heard plenty about how AI can improve those lives and deliver tremendous value, but we have also heard the other side of the argument, the Orwellian, apocalyptic tale. What if artificial intelligence decided it would be better if humans ceased to exist? That wouldn't be good, would it? But what exactly are machine learning and AI, and what are the pros and cons of the technology? Artificial intelligence, as the name suggests, is intelligence that is artificially programmed or engineered, usually on a computer system. It can be thought of as a human-made "mind" that allows a machine to simulate human intelligence and problem-solving skills: when a problem or question is posed, the device replies with relevant information to help solve or answer it. Artificial intelligence is built by "feeding" a large dataset into a computer system and training it to recognize patterns in the data, so that it can answer questions about the data accurately and intelligently. This process of "feeding" or training on data is carried out through a mechanism called machine learning.


Examples of A.I.

Now that we've discussed the symbiotic relationship between machine learning and AI, what are some real-world examples of these technologies at work?


Voice assistance

One of the most well-known examples of AI is the modern voice assistant, such as Google Assistant, Amazon Alexa, and Apple's Siri. These voice assistants will answer your questions. Perhaps you have already asked, "How is the weather today?", "Take me to the store", or "Call Mom", and your phone executed the command precisely and almost immediately. These AI systems were fed millions of data points and trained to determine how best to meet some of the most common needs of mobile phone users.


Search Engines

Most of us use the internet every day in one way or another. Whether you're reading the news, looking for new dinner recipes, or planning your next vacation, search likely plays a big part in your internet exploration. The most popular search engine by far is Google, and Google Search is a great example of an AI system that surfaces the personalized, specific information it assumes you want to see.


Search engines index millions of resources, allowing us to find the exact information we are looking for. Google Search learns your habits, preferences, location, and more to give you the information it thinks interests you most. The machine-learning side of Google's AI includes reading and analyzing your search history, time spent on websites, geolocation points, and much more. Once this is evaluated, the AI has a detailed picture of who you are and uses it to make your search experience smooth and user-friendly.


Online Chat Bots

You may have been shopping online and found that, while using the site, you appeared to be chatting with someone about a particular promotional offer or shipping question. In fact, you may not have been talking to a person at all, but to an AI: a system for answering product-related questions. These chatbots are programmed to analyze frequently asked questions and respond accordingly.

Machine Learning

Machine learning is an application of AI that allows a system to train, learn, and improve from experience rather than being explicitly directed. To achieve this, a group of engineers determines the purpose of a particular AI application and creates clear parameters for it. They specify what the AI will do and teach it everything there is to know about the subject by feeding it data. The data can be records, photos, videos, and almost anything else in digital form. This data is passed to the program to analyze, verify, and learn from, usually through pattern recognition and modeling. The machine is taught to look for specific aspects of the dataset and then draws inferences from what it has collected. Over time, the machine uses this data and a set of algorithms to mimic the way humans learn, gradually improving its accuracy.
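
The feed-data, find-patterns, answer-questions loop described above can be sketched in a few lines of plain Python. The fruit data and the nearest-centroid rule below are my own toy illustration, not a method named in the text:

```python
# Toy pattern recognition: average each label's examples into a "pattern"
# (centroid), then answer new questions by finding the closest pattern.

def train(examples):
    """Learn one centroid (average feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(model, features):
    """Classify a new point by the nearest learned centroid."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], features))
    return min(model, key=dist)

# (weight in grams, diameter in cm) -> fruit label; the numbers are made up.
training_data = [
    ([150, 7], "apple"), ([170, 8], "apple"),
    ([120, 6], "lemon"), ([100, 5], "lemon"),
]
model = train(training_data)
print(predict(model, [160, 7]))  # closest to the learned "apple" pattern
```

The more labelled examples you feed in, the better the averaged patterns reflect reality, which is the "improves with experience" part of the definition above.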


So, how does machine learning work?

The details and technicalities of machine learning are a huge subject that would take months to write up, but here is the core idea. To grasp the concept, treat the machine like your own child: a child growing up under your control and care, observing the world around them with curious eyes. To communicate with your child, you must teach them a language you know, e.g. English, Arabic, or Spanish, whatever language you speak. In the machine's world, this translates to programming languages like Python, R, or Java, whichever you prefer as a developer.

Now that your child understands the basics and has learned the language you use, they can communicate with you easily without being taught the same thing over and over. They start to remember things from previous training and experiences, accumulate knowledge about their environment, analyze huge amounts of sensory input, and react to the real world around them based on the knowledge and values you convey. That, in essence, is machine learning! With machine learning tools like Scikit-Learn, PyTorch, TensorFlow, Google Cloud AutoML, and Weka, among others, you can teach your machines to learn from experience, just like a smart child.


The machine learning process you implement, i.e. the set of rules you teach your machine for calculating its responses to external stimuli, lets machines like your computer use data analysis and problem-solving to make predictions from historical data. You may be new to terms like big data and machine learning, but we can assure you that you have been benefiting from them for years, perhaps without realizing it. Take recommendation engines, which you must have used at some point in your life, or the spam filters on smart devices that automatically group suspected spam messages.
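
A recommendation engine of the kind mentioned above can be sketched, in heavily simplified form, as "find the user whose ratings look most like yours and borrow their favorites". The ratings below are invented for illustration:

```python
# Toy collaborative filtering: compare users by the cosine similarity of
# their rating vectors, then recommend the neighbour's best unseen item.
from math import sqrt

def cosine(u, v):
    """Similarity of two sparse rating dicts (1.0 = identical taste)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(ratings, user):
    """Recommend the best item the most similar other user has rated."""
    others = [name for name in ratings if name != user]
    neighbour = max(others, key=lambda name: cosine(ratings[user], ratings[name]))
    unseen = {item: score for item, score in ratings[neighbour].items()
              if item not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

ratings = {  # invented viewing history
    "ana": {"dune": 5, "alien": 4, "heat": 1},
    "ben": {"dune": 5, "alien": 5, "blade runner": 4},
    "carl": {"heat": 5, "casino": 4},
}
print(recommend(ratings, "ana"))  # ben's taste is closest, so: blade runner
```

Real recommenders work on millions of users and items with far more sophisticated models, but the "people like you also liked" intuition is the same.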


These are just a few uses of machine learning. Read on to learn about other common applications, such as fraud detection, predictive maintenance, and, okay, no more spoilers! One warning: machine learning should not be confused with deep learning. If you google "machine learning vs. deep learning", you will find that deep learning is a subset of machine learning. In machine learning, machines learn to think and act for themselves based on historical data with minimal human intervention, while in deep learning, machines learn to think using structures modeled after the human brain.
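
The "structures modeled after the human brain" are networks of simple artificial neurons. As a minimal illustration of my own (a single neuron, not a deep network), here is the classic perceptron rule learning the AND function:

```python
# One artificial neuron: a weighted sum followed by a threshold. The
# perceptron rule nudges the weights whenever the neuron answers wrongly.

def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # 0 when correct, ±1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for AND: output 1 only when both inputs are 1.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in and_samples])  # learned outputs: [0, 0, 0, 1]
```

Deep learning stacks millions of such units in layers; the learning signal is distributed through the whole network rather than a single weight update, but the idea of adjusting weights to reduce error is the same.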


Turing admits that he cannot conclusively refute Lady Lovelace's objection. But he proposes a machine that

a) Provides a solution to the imitation game.

b) Addresses Lady Lovelace's objection.

In his opinion, creating a machine that performs well in the imitation game is merely a question of programming. He believed that computing capacity would improve over time and would not be the limiting factor; the real challenge would be writing a program that can fool a human interrogator. He proposed returning to the roots of what shapes the intelligence of the human mind: the initial state of the mind at birth, the upbringing the child receives, and the lessons it learns. Instead of trying to write a program that mimics an adult's brain, we should imitate a child's brain: a blank notebook with just some inference mechanism that allows the machine to learn. The child machine would receive training from a teacher, as a normal child would, and Turing believed this education would probably take about as long as it does for a human child. It would be rewarded for good results and correct responses, and punished for disobedience and wrong answers. In essence, he proposed building a machine that can learn. Or, slightly rephrased, he proposed machine learning.


Or for the real nerds: Reinforcement Learning.

Turing expected that the teacher would not always know exactly what is going on inside the child machine, just as a teacher cannot always tell what a human student is thinking. And if we cannot tell what is going on inside the machine while it learns, how can we claim to have told it exactly what to do? So if we could get a machine to learn, as Turing suggests, it would do things we did not specifically tell it to do, which would refute Lady Lovelace's argument.
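
Turing's reward-and-punishment scheme maps neatly onto modern reinforcement learning. The sketch below is a toy of my own: tabular Q-learning on a five-cell corridor in which the agent is "praised" only for reaching the right-hand end. We never tell the machine which moves to make; the preference for moving right emerges from the rewards alone, which is exactly the point against Lady Lovelace.

```python
# Tabular Q-learning on a tiny corridor: states 0..4, actions -1 (left)
# and +1 (right). The agent starts in the middle; reaching state 4 earns
# a reward of 1, reaching state 0 ends the episode with nothing.
import random

def q_learn(n_states=5, episodes=500, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(n_states) for a in (-1, 1)}
    for _ in range(episodes):
        s = n_states // 2
        while 0 < s < n_states - 1:
            if rng.random() < 0.2:                    # explore occasionally
                a = rng.choice((-1, 1))
            else:                                     # otherwise act greedily
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = s + a
            reward = 1.0 if s2 == n_states - 1 else 0.0   # "praise" at the goal
            done = s2 in (0, n_states - 1)
            best_next = 0.0 if done else max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learn()
# After training, moving right from the start state should score higher.
print(q[(2, 1)], q[(2, -1)])
```

The learned table is the machine's "notebook": nobody wrote down the policy, it was shaped entirely by reward and punishment.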


Types of Machine Learning

Machine learning models serve two main functions: prediction and classification. Predictive models forecast future outcomes from existing and historical data, while classification models assign data items to categories. In general, machine learning can be divided into four broad categories based on the type of algorithm data scientists use, which in turn depends on the kind of outcome they want to predict.

The four types of machine learning are:

1. Supervised Machine Learning

Think of this type of training as what you would give young children, who are helpless on their own. You need to point things out and label them so your child understands what you want them to do. You have to lead the way.

In supervised machine learning, both the inputs and the desired outputs of the algorithm are specified. Data scientists use labeled training data to define the variables from which the algorithm learns its correlations.

2. Unsupervised Machine Learning

Now your child is a little older and can do small chores independently. You no longer have to label everything for them; they can figure things out for themselves. In unsupervised machine learning, data scientists work with unlabeled data: the algorithms analyze datasets on their own to discover hidden patterns and meaningful connections. In short, neither the labels nor the outputs are predetermined.

3. Semi-Supervised Machine Learning

This is the more relaxed parenting style you might use once your kid is a teenager: they still need some guidance, but they also need the freedom to explore on their own. Semi-supervised learning is a combination of supervised and unsupervised machine learning, where data scientists mostly work with labeled data but also allow the model to examine unlabeled data and draw its own conclusions.

4. Reinforced Machine Learning

It's like teaching a child right from wrong by rewarding good behavior: for example, every time they learn a new word, you offer them some chocolate. In reinforcement learning, data scientists train the machine to complete a well-defined task by sending it positive or negative signals as it acts. However, much like the children in our lives, the algorithm itself decides which steps to take! It is a trial-and-error process that ultimately converges on verified results.
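
The unsupervised idea above can be felt directly in code: below, a tiny one-dimensional k-means loop (my own toy example, with invented measurements) discovers two groups in the data without being given any labels at all.

```python
# Toy 1-D k-means: repeatedly assign each value to its nearest centre,
# then move each centre to the mean of its assigned values.

def kmeans_1d(values, k=2, iters=20):
    # Crude initialisation: pick spread-out values from the sorted list.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

weights = [100, 105, 120, 150, 160, 170]  # grams; no fruit labels attached
print(kmeans_1d(weights))  # two centres emerge, one per hidden fruit type
```

Nobody told the algorithm there were "light" and "heavy" fruits; the structure fell out of the data, which is the defining trait of unsupervised learning.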

Deep Blue

On February 10, 1996, a computer called Deep Blue beat a human in a game of chess. What made it historic was that the human was the reigning world chess champion, Garry Kasparov. Although Kasparov eventually won that match, it was the beginning of the end of human dominance in this deeply strategic board game. In May 1997, after several upgrades, the same computer played Kasparov again. This time the machine beat him two games to one, with three draws, in a six-game match. If there was a debate about whether a computer could beat the best human at chess, that is where it ended. Or is it?

Deep Blue is as fascinating in what it isn't as in what it is. While its solid knowledge of chess derives from studying previous games, its superpower lies in sheer processing ability. Deep Blue is a massively parallel system (the technical term for many processors running side by side), with 30 processors plus 480 special-purpose chess chips. Put simply, it's a beast: it could evaluate roughly 200 million chess positions per second.

That begs a question: is Deep Blue a learning machine? Under the tournament rules, Deep Blue's developers were allowed to change and update the software between games. This means the engineers learned from the games and taught the computer, rather than the computer learning on its own, although IBM describes those changes as updates to the system's learning mechanism. Fair enough. Now contrast Deep Blue's performance with how a human thinks. A person improves at chess the more they play against better opponents. Even if the IBM engineers were right that their intervention between games was mere refinement, the question remains: does Deep Blue understand chess and its strategies the way a human learns them? I am not referring to the wetware-versus-hardware/software argument. Humans do not evaluate anywhere near 200 million moves per second; Deep Blue has that ability and uses it very effectively.
This points to the difference between human knowledge and the raw power of a computer. Deep Blue may have had no more chess knowledge than Kasparov, but at the end of the day, Deep Blue won.
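
Deep Blue's raw-power approach rests on game-tree search. The minimax sketch below, on a trivially small invented tree rather than chess, shows the core idea: assume both sides play their best move and pick the branch with the best guaranteed outcome.

```python
# Minimax over a nested-list game tree. Leaves are position evaluations
# from the computer's point of view; inner lists are choice points that
# alternate between the computer (maximizing) and the opponent (minimizing).

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):   # leaf: a position's evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A depth-2 tree: the computer picks a move, then the opponent replies.
tree = [
    [3, 12],   # move A: the opponent will answer with the reply worth 3 to us
    [2, 8],    # move B: the opponent holds us to 2
    [14, 5],   # move C: the opponent holds us to 5
]
print(minimax(tree))  # best guaranteed outcome: 5, via move C
```

Deep Blue applied this idea (with heavy refinements such as alpha-beta pruning and hand-tuned evaluation) to trees hundreds of millions of positions wide, which is where the 200-million-positions-per-second figure matters.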


Significance of Studying AI and Machine Learning

In diverse fields including banking, healthcare, and smartphone apps, artificial intelligence (AI) has transformed the way people learn, think, and act. Most intriguingly, we don't fully grasp the impact of AI on our daily lives. AI is already everywhere, from Google to video games with virtual players to social media tools. It is undoubtedly one of the most debated topics in today's business world and one of the most exciting and sought-after fields of work. Machine learning, a branch of artificial intelligence, enables computers to learn and improve on their own without being explicitly programmed. Students in machine learning programs learn to build self-learning computer systems by combining algorithms with statistical models.


Some of the benefits of this career today are:

1. Artificial intelligence and machine learning are the revolutionary careers of this century

Today's world is data-driven. Humans are not capable of handling such a large amount of data on their own. Even traditional data processing methods are not enough. Today, new methods and cutting-edge science such as machine learning and artificial intelligence are the only ways to process and organize such a large amount of data. Individuals trained in artificial intelligence and machine learning can find many jobs and careers in this field.

2. Careers in AI and machine learning pay off

One of the fastest-growing technologies in today's job market is artificial intelligence. The highest salary level for an AI and ML engineer can reach 50,000 per year, so students have good opportunities to earn well in this field. Students who graduate from the best AI & ML colleges in Maharashtra have the chance to make this kind of money and much more.

3. Machine learning and artificial intelligence are different fields

"Smart" computers use artificial intelligence to think like humans and perform tasks independently. Computer systems develop their intelligence through machine learning. One such method of training computers to mimic human thinking involves the use of neural networks, a set of algorithms modeled after the human brain. Top AI & Machine Learning Universities in Maharashtra offer students various career opportunities.

4. Machine learning and artificial intelligence are the most in-demand skills of the 21st century

It is undoubtedly true that artificial intelligence will eventually replace many workers, but it will also open up countless new job opportunities in related industry sectors. To stay current, everyone should have at least a basic understanding of AI. It is exciting to be part of this revolutionary change, because artificial intelligence is fundamentally reshaping civilization; given its countless uses, many consider it the skill of the century. A career in artificial intelligence and machine learning is therefore well positioned for the future.

5. Artificial intelligence and machine learning can process huge amounts of data

Every day, vast amounts of data are generated. Remarkably, we now have AI-powered computers and gadgets that can process all of it. India's Aadhaar identity system is a good example of such big data, as are the Facebook and Twitter posts we like, view, retweet, and comment on. AI-powered programs examine trends in this data and act on them.

6. Artificial intelligence and machine learning benefit society

Artificial intelligence has a lot to offer this world, and one major application is agriculture. We know how difficult farming is in the modern world: as the water table sinks and the struggle for natural resources intensifies, farmers face new threats every day. A course on AI and machine learning in Nashik can even help you contribute to society. FarmLogs, for example, is software that makes farmers' jobs easier by providing information about soil, temperature, and fields, and it helps them keep up with unpredictable plant growth. As a result, farmers increase their income. At the same time, more governments are integrating artificial intelligence (AI) into their smart-city applications, helping them improve urban planning, reduce crime, and make better use of real estate.


Free Will and AI

Artificial intelligence is a giant in the high-tech industry. While technology has always been an important factor in this sector, artificial intelligence is now moving to the heart of the business. From life-saving medical devices to self-driving cars, AI is being built into nearly every application and device. Artificial intelligence (AI) is said to be the property of machines, such as computers, that exhibit intuition or thought. Philosophically, the basic question about AI is: could there be such a thing? Or, as Alan Turing put it, "Can a machine think?"

The term "free will", or "autonomy", has gained popularity over the past two centuries as a definitive term for a critical mode of control over one's actions. Human action is the result of deliberation and contributes to human fulfillment. Valuable bots must have a comparable kind of control, and we must design for it. Some agents have more agency, or are more autonomous, than others, meaning they govern themselves and their affairs to a greater degree. We recognize that we have options and are aware of our choices; both are important, even for bots, and decision-making requires more structure on the agent's part than merely having options available. The consciousness of free will is therefore not just an epiphenomenon of a structure that satisfies various needs. Addressing the problems of AI in philosophy requires a serious look at autonomous machines, and it invites a skeptical view. Skeptics about machine autonomy hold that any technological machine built, designed, and operated by humans depends on its human counterpart in a way that radically limits its possibilities for autonomy and freedom, except in the most trivial sense. I treat Lady Lovelace's objection, mentioned in Turing's famous 1950 discussion of thinking machines, as a strong and reasoned statement of machine autonomy skepticism (MAS). I contend that Lady Lovelace's objection is best viewed as an expression of the dual-nature theory of artifacts (DNTA); consequently, a rejection of DNTA leads to a rejection of MAS. In the first section of the article, I present some recent attempts to examine the problem of autonomy as it emerges in the robotics literature, and I argue that it is the dual-nature theory of artifacts that supports Lady Lovelace's objection.
In the second section, I argue that an epistemological reading of Lady Lovelace's objection, treating machine autonomy as something we merely cannot know, also falls short of the force of her radically skeptical argument. I then argue that Turing's original response to Lovelace's objection is best understood as an argument against DNTA, and that it provides an answer particularly suited to the skeptic. On my reading, Turing's positive description of "thinking machines" creates a framework for thinking about autonomous machines that leads to practical, empirical methods for discussing the machines that occupy our technological world. I conclude by noting that recent growth in computing has centered on "machine learning", which bears out Turing's account of the social integration of machines and humans.


Do Humans need AI

Is AI necessary in human society? It depends. If people want a faster, more efficient way to get their work done, carried out by workers that never take a break, then yes. If humanity is content with a natural way of life, without an undue desire to conquer the order of nature, then no. History tells us that people always look for faster, easier, more effective, and more convenient ways to accomplish the task at hand, and this pressure to evolve drives humanity to search for new and better methods. As Homo sapiens, people discovered that tools could relieve many difficulties of daily life, and with the tools they invented they could do their work better, faster, smarter, and more effectively. The urge to create something new became the engine of human progress. Today, thanks to technology, we enjoy a much simpler and calmer life. Human society has used tools since the dawn of civilization, and human progress has depended on them. Humanity now lives in the 21st century and no longer has to work as hard as its ancestors did, because new machines work for it.

All is well, then, and should remain well for AI, but a warning came early in the 20th century as human technology advanced: in his book Brave New World, Aldous Huxley cautioned that humans might, through the development of gene technology, create a monster or a superman. Meanwhile, modern AI is also transforming the healthcare industry, helping doctors diagnose diseases, find their causes, suggest forms of treatment and surgery, and even predict whether a disease is life-threatening. In a recent study, surgeons at Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot as it performed soft-tissue surgery, stitching up a pig's intestine, and reported that the robot did the job better than a human surgeon. This suggests that robotic surgery can overcome the limitations of existing minimally invasive procedures and enhance the skills of surgeons performing open surgery. Above all, we see high-profile examples of AI everywhere: autonomous vehicles (such as drones and driverless cars), medical diagnostics, art-making, games (such as chess or Go), search engines (such as Google Search), online assistants (such as Siri), image recognition in photos, spam filtering, flight-delay prediction, and more. Granted, AI has become so indispensable that without it our world today would descend into chaos in many ways.


The Impact of AI on Human Society

Negative Impact

Questions have been asked: as AI continues to evolve, will human labor still be necessary when everything can be done mechanically? The process of evolution takes eons to unfold, so we would not notice humanity's regression. But what if the AI becomes so powerful that it can call the shots and disobey the orders of its master, humanity? Let's look at the negative impacts AI may have on human society.

A major social shift will change the nature of how we live in the world and disrupt the human community. Humanity has always had to work hard to make a living, but with the help of AI we can program machines to do things for us without even picking up a tool. Human closeness will gradually decrease as AI replaces the need for people to meet face-to-face to exchange ideas; artificial intelligence will come between people. Assembly lines are already filled with machines and robots, costing traditional workers their jobs, and even supermarkets will no longer need clerks as digital devices take over human labor.

AI will also create wealth inequality, as AI investors will capture most of the revenue. The gap between rich and poor will widen, and the so-called "M-shaped" wealth distribution will become more pronounced.

Once an AI has been trained and has learned to perform its given task, there may eventually come a stage where humans are no longer in control, creating unforeseen problems and consequences. This refers to the AI's ability, once loaded with all the necessary algorithms, to run automatically on its own course, ignoring the commands of its human controller. Human masters who create an AI can also build in racial prejudice or self-centered aims that harm certain people or things. For example, the United Nations has voted to limit the spread of nuclear weapons for fear of their indiscriminate use to destroy humanity or to target specific races or regions in pursuit of dominance. Likewise, an AI might target certain races or programmed objects to carry out its programmers' orders of destruction and thus cause a world catastrophe.


Positive Impact

However, there are also many positive human implications, particularly in the area of health care. AI gives computers the ability to learn, reason, and apply logic. Scientists, medical researchers, physicians, mathematicians, and engineers can work together to develop AI intended for medical diagnosis and treatment, thereby providing reliable and safe healthcare systems. As health professors and medical researchers strive to find new and efficient ways to treat diseases, not only can digital computers help with the analysis, but robotic systems can also be engineered to perform some delicate medical procedures with precision. Here we see AI's contribution to healthcare. For fast and accurate diagnoses, IBM's Watson computer was used for diagnosis with an intriguing result. By uploading the data to the computer, you will get the AI diagnosis immediately. AI can also offer different forms of treatment for clinicians to consider. The procedure goes something like this: upload the digital results of the physical examination to the computer, which will examine possibilities and automatically diagnose whether or not the patient suffers from deficiencies and diseases, even suggesting various types of treatment available. Social therapeutic RobotsPets were recommended to seniors to reduce stress and blood pressure, anxiety and loneliness and improve social interaction. Now it has been suggested that cyborgs accompany such lonely elderly people, even to help with some household chores. Therapeutic robots and social welfare robotic technology contribute to improving the quality of life of the elderly and people with physical disabilities. Reduces errors related to human fatigueHuman errors in the workforce are inevitable and often costly, the higher the fatigue, the greater the risk of error. However, not all technology suffers from fatigue or emotional distraction. It prevents mistakes and can perform tasks faster and more accurately. 
AI-based surgical procedures
AI-based surgical procedures are available for patients to choose. Although such systems still need to be operated by medical professionals, they can get the job done with less harm to the body. The da Vinci Surgical System, a robotic technology that allows surgeons to perform minimally invasive procedures, is now available in most hospitals. These systems allow a much higher level of precision and accuracy than manually performed procedures: the less invasive the surgery, the less trauma, blood loss, and scarring the patient suffers.

Improved radiology
The first CT scanners were introduced in 1971, and the first magnetic resonance imaging (MRI) of the human body was performed in 1977. In the early 2000s, cardiac MRI, body MRI, and fetal imaging became routine. The search for new algorithms to detect specific diseases and analyze scan results continues. All of this is the contribution of AI technology.

The Challenges of AI to Bioethics

Bioethics is a discipline that focuses on relationships among living beings. It emphasizes the good and the right in the biosphere and can be divided into at least three areas: health bioethics, which concerns the relationship between doctors and patients; social bioethics, which concerns the relationship between the individual and society; and environmental bioethics, which concerns the relationship between humans and nature, including animal ethics, land ethics, and ecological ethics. With the advent of AI, humans face a new challenge: how to relate to something that is not itself natural. Bioethics usually analyzes relationships among natural existences, whether humans or their environment, which are part of natural phenomena. But now humans must deal with something man-made, artificial, and unnatural, namely AI. Humans have created many things, yet they have never before had to think about how to deal ethically with their own creation.

The AI itself is devoid of feelings or personality. AI engineers have realized the importance of giving AI the ability to detect, and thereby prevent, covert activity that causes unwanted damage. From this perspective, we understand that AI can have a negative impact on people and society; therefore, the bioethics of AI becomes important to ensure that AI does not deviate from its originally intended purpose. Stephen Hawking warned in early 2014 that the development of full AI could mean the end of mankind.


He said that once humans develop full AI, it could take off on its own and redesign itself at an ever-increasing rate. Humans, constrained by slow biological evolution, could not compete and would be replaced. In his book Superintelligence, Nick Bostrom likewise argues that AI will pose a threat to humanity: convergent behavior, such as gathering resources or protecting itself from shutdown, could harm humanity.


The question is: must we think about bioethics for a man-made product that has no vitality of its own? Can a machine have a mind, consciousness, and mental states in the same sense that humans do? Can a machine be sentient and therefore deserve certain rights? Can a machine intentionally cause harm? The following regulations should be seen as a bioethical mandate for the production of AI.

• AI must not trample on human autonomy. Humans must not be manipulated or coerced by AI systems, and humans must be able to intervene or oversee any decisions made by the software.

• AI must be safe and accurate. It must not be easily compromised by external attacks and must be reasonably reliable.

• Personal data collected by AI systems must be secure and private. It must not be accessible to just anyone, and it must not be easy to steal.

• The data and the algorithms used to create an AI system must be accessible, and the decisions made by the software must be "understood and tracked by human beings". In other words, operators should be able to explain the decisions their AI systems make.

• Services provided by AI should be available to everyone, regardless of age, gender, race, or any other characteristic.

• Similarly, systems should not be biased in this regard.

• AI systems should be sustainable (i.e. environmentally responsible) and should "encourage positive social change".

• AI systems must be auditable and covered by existing corporate whistleblower protections. Negative impacts of a system must be identified and reported in advance.


From these guidelines, we can suggest that future AI should be endowed with human sensibility, or "AI humanities". To achieve this, AI researchers, manufacturers, and all related industries must remember that technology exists to serve people and their society, not to manipulate them. Bostrom and Yudkowsky named responsibility, transparency, auditability, incorruptibility, and predictability as criteria for reflection on the computerized society.

Suggested Principle for AI Bioethics

Nathan Strout, a reporter covering space and intelligence systems, recently reported that intelligence agencies are developing their own AI ethics. The Pentagon announced in February 2020 that it was adopting Principles for the Use of AI as guidelines for the department to follow when developing new AI tools and AI-enabled technologies. Ben Huebner, head of the Civil Liberties, Privacy, and Transparency Office at the Office of the Director of National Intelligence, said: "We need to make sure we have transparency and accountability in these structures when we use them. They must be safe and resilient." Two topics were suggested for the AI community to think about further: explainability and interpretability.

Explainability means understanding how an analysis works, while interpretability means being able to understand a specific result of an analysis. The principles proposed by AI bioethics scholars are all well put. Drawing together bioethical principles from all related fields, I propose here three principles that should guide the future development of AI technology. AI, after all, is designed and made by humans, and it carries out its work according to its algorithm.
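To make the distinction concrete, here is a minimal illustrative sketch in Python (the feature names and weights are hypothetical, not taken from any real clinical system). A hand-written linear risk score is explainable because its rule, a visible weighted sum, is fully open to inspection; each individual prediction is interpretable because the contribution of every feature can be read off.

```python
# Hypothetical linear risk score (illustration only, not medical advice).
# Explainability: the model's rule -- a weighted sum -- is fully visible.
# Interpretability: each prediction can be broken into per-feature parts.
WEIGHTS = {"age": 0.02, "blood_pressure": 0.01, "smoker": 0.5}

def risk_score(patient):
    """Return the overall score and each feature's contribution to it."""
    contributions = {name: w * patient[name] for name, w in WEIGHTS.items()}
    return sum(contributions.values()), contributions

score, parts = risk_score({"age": 60, "blood_pressure": 130, "smoker": 1})
# score is roughly 3.0 = 1.2 (age) + 1.3 (blood pressure) + 0.5 (smoker);
# 'parts' is exactly the per-prediction breakdown interpretability asks for.
print(score, parts)
```

A deep neural network making the same prediction would offer neither property out of the box, which is why the X-ray example below treats an unexplainable algorithm as an unacceptable risk.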


AI itself lacks empathy and the ability to tell right from wrong, and it can make processing errors. The ethical quality of an AI depends entirely on its human designers; hence this is a bioethics of AI and, at the same time, a trans-bioethics that bridges the human and the material world. Here are the principles:


1. Charity: Charity means doing good, and here it refers to the purpose and functions of AI, which should benefit all human life, society, and the universe. Any AI that does destructive work in the universe, against any form of life, should be avoided and banned. AI should have no purpose other than serving human society as a whole, not individual personal gain; it must be altruistic, not egocentric.

2. Defense of Values: This refers to AI's congruence with social values, in other words, with the universal values that govern the ordering of the natural world and that must be respected. AI cannot rise above social and moral norms and must be free from prejudice. Scientific and technological developments should serve to improve human well-being, which is the main value that AI should uphold as it advances. It must also be understandable, traceable, incorruptible, and observable.

3. Lucidity: AI technology must be open to public audit, testing, and review, and subject to accountability. In situations such as diagnosing cancer from X-rays, an algorithm that cannot explain its work may pose an unacceptable risk. Explainability and interpretability are therefore necessary.


Conclusion

What Turing proposed as an imitation game has somehow become the practical approach to measuring machine intelligence. Turing believed that by the end of the 20th century nobody would dispute that machines can think. We're not there yet: no machine has officially passed the Turing test. Eugene Goostman, a chatbot posing as a 13-year-old Ukrainian boy, fooled some of the judges by attributing its poor communication skills to a limited knowledge of English. But according to the experts, that was simply gaming the test.

As Turing said, winning the imitation game is a matter of good programming. But the AI world hasn't focused on building programs just to pass the Turing test; the aim is to solve real problems. We have cancer-detecting computers, personal digital assistants, robots that improve the social skills of autistic children, and even sex robots.


Turing asked whether there is any conceivable machine that would do well in the imitation game. We're not there yet. However, given the advances in artificial intelligence, most people can now envision such a machine.


"Computing Machinery and Intelligence" laid the groundwork for what we now call artificial intelligence. It is yet another example of Turing's genius.


Having seen that machine learning belongs to the realm of artificial intelligence, and having examined some of the ways these technologies appear in the world around us, we can confidently say that AI is the technology of the future. Many benefits arise from the rapid adoption of artificial intelligence, and if AI is designed and implemented so as to minimize the technology's risks, we can imagine living in a much more efficient society.


Today, AI and machine learning are booming, and many colleges offer bachelor's, master's, and even doctoral degrees in the field. After earning these degrees, students can work for multinational companies and government institutions, or even start their own businesses. It is a futuristic, high-tech career for today's generation.


Data and statistics (also known as deep data analysis) move the world when it comes to business. And let's face it, machine learning is like statistics on steroids! Whether it's fraud detection, recommendation engines, image recognition, or a million other artificial intelligence (AI) applications, one thing is clear to anyone who takes the time to look: machine learning, powered by big data, is the future of technology as we know it. Not only does it make our lives easier, it also improves the way we see the world around us and helps us make better decisions.


Enjoy the free time to bake a cake and eat it too, thanks to machine learning lightening the load of your mundane, manual activities.



By Harshit Mishra | March 16, 2022 | Writer at Gurugrah_Blogs.

 
