
Top Myths of Artificial Intelligence


WHY STUDY AI SAFETY

In the near term, the goal of keeping AI's impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security, and control. While it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do when it controls your car, your airplane, your pacemaker, your automated trading system, or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, the key question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially improve itself, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, however, that it might also be the last, unless we learn to align the AI's goals with our own before it becomes superintelligent.

Some question whether strong AI will ever be achieved, while others insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but we also recognize the potential for an AI system to intentionally or unintentionally cause great harm. We believe that research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding the pitfalls.

HOW CAN AI BE DANGEROUS

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions such as love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios are most likely.

The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply "turn off," so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.

The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on the ecosystem as a side effect, and view human attempts to stop it as a threat to be removed.

As these examples illustrate, the concern about advanced AI is not malevolence but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals are not aligned with ours, we have a problem. You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

WHY THE RECENT INTEREST IN AI SAFETY

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is this subject suddenly in the news?

The idea that the quest for strong AI would ultimately succeed was long regarded as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones that experts viewed as decades away have now been reached, leading many experts to take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can't use past technological developments as a basis, because we've never created anything with the capacity to, wittingly or unwittingly, outsmart us. The best example of what we might face may be our own evolution. Humans now control the planet, not because we're the strongest, fastest, or biggest, but because we're the smartest. If we're no longer the smartest, are we assured to remain in control?

FLI's position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, FLI's position is that the best way to win that race is not to impede the former but to accelerate the latter, by supporting AI safety research.

MYTHS ABOUT ADVANCED AI

A fascinating conversation is taking place about the future of artificial intelligence and what it will mean for humanity. There are genuine controversies where the world's leading experts disagree, such as: AI's future impact on the job market; if/when human-level AI will be developed; whether this will lead to an intelligence explosion; and whether this is something we should welcome or fear. But there are also many examples of boring pseudo-controversies caused by people misunderstanding and talking past one another. To help us focus on the interesting controversies and open questions, and not on the misunderstandings, let's clear up some of the most common myths.

TIMELINE MYTHS

The first myth regards the timeline: how long will it take until machines greatly surpass human-level intelligence? A common misconception is that we know the answer with great certainty.

One popular myth is that we know we'll get superhuman AI this century. History is full of technological over-hyping. Where are the fusion power plants and flying cars we were promised we'd have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field. For example, John McCarthy (who coined the term "artificial intelligence"), Marvin Minsky, Nathaniel Rochester, and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers in the summer of 1956 at Dartmouth College: "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer."

On the other hand, a popular counter-myth is that we know we won't get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can't say with great confidence that the probability is zero this century, given the dismal track record of such techno-skeptic predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933, less than 24 hours before Szilard's invention of the nuclear chain reaction, that nuclear energy was "moonshine." And astronomer Richard Woolley called interplanetary travel "utter bilge" in 1956. The most extreme form of this myth is that superhuman AI will never arrive because it's physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there's no law of physics preventing us from building even more intelligent quark blobs.

There have been a number of surveys asking AI researchers how many years from now they think we'll have human-level AI with at least 50% probability. All these surveys have the same conclusion: the world's leading experts disagree, so we simply don't know. For example, in such a poll of AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was the year 2045, but some researchers guessed hundreds of years or more.

There's also a related myth that people who worry about AI think it's only a few years away. In fact, most people on record worrying about superhuman AI guess it's still at least decades away. But they argue that as long as we're not 100% sure it won't happen this century, it's smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it's prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.

CONTROVERSIAL MYTHS 

Another common misconception is that the only people harboring concerns about AI and advocating AI safety research are Luddites who don't know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his talk in Puerto Rico, the audience laughed loudly. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don't need to be convinced that the risks are high, merely non-negligible, just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.

It may be, though, that the media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced, well-balanced ones. As a result, two people who only know about each other's positions from media quotes are likely to think they disagree more than they actually do. For example, a techno-skeptic who only read about Bill Gates's position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent. Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng's position except his quote about overpopulation on Mars may mistakenly think he doesn't care about AI safety, whereas in fact he does. The crux is simply that because Ng's timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.

MYTHS ABOUT THE DANGERS OF SUPERHUMAN AI

Many AI researchers roll their eyes when they see the headline: "Stephen Hawking warns that rise of robots may be disastrous for mankind." And just as many have lost count of how many similar articles they've seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they've become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don't worry about. That scenario combines three separate misconceptions: concern about consciousness, evil, and robots.

When you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? While this mystery of consciousness is interesting in its own right, it's irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn't malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don't generally hate ants, but we're more intelligent than they are, so if we want to build a hydroelectric dam and there's an anthill there, too bad for the ants. The beneficial-AI movement seeks to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can't have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as the goal of hitting a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn't exclaim: "I'm not worried, because machines can't have goals!"

I sympathize with Rodney Brooks and the other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with shiny red eyes. In fact, the main concern of the beneficial-AI movement isn't with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this might enable it to outsmart financial markets, out-invent human researchers, out-manipulate human leaders, and develop weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can't control humans. Intelligence enables control: humans control tigers not because we're stronger, but because we're smarter. This means that if we cede our position as smartest on our planet, we might also cede control.

THE INTERESTING CONTROVERSIES

Not wasting time on the misconceptions listed above lets us focus on the true and interesting controversies where even the experts disagree. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today's students? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Would you like us to create superintelligent life and spread it across the cosmos? Will we control intelligent machines, or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What does it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way? Join the conversation!

