
Drones take over the WORLD with Reinforcement Learning

If you haven’t already heard the latest news in drone technology, you should know that reinforcement learning algorithms are making drones smarter than ever before. Championed by research labs such as OpenAI (the AI research company co-founded by Greg Brockman and Sam Altman), this branch of artificial intelligence has shown impressive results, improving drones’ ability to fly more consistently and aggressively with less human interference and supervision. At the same time, reinforcement learning enables drones to collect more sensor data while conserving their batteries, so they can stay airborne longer with minimal supervision.

Reinforcement Learning – an Overview


The core principle of RL is that an agent can be trained to perform a task by providing feedback on its success or failure. The agent learns from experience and will repeat actions that lead to favorable outcomes while avoiding actions that lead to unfavorable ones. In addition, it will try different strategies until it finds one that works well. This is trial-and-error learning, but with little wasted effort: every attempt, successful or not, produces feedback that is folded back into the agent’s strategy.
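To make the idea concrete, here is a minimal sketch of tabular Q-learning on a hypothetical altitude-hold task. The states, actions, reward values, and hyperparameters are illustrative assumptions, not taken from any real drone project:

```python
import random

# Toy altitude-hold task: states are discrete altitude levels 0..10,
# actions are descend (-1), hold (0), climb (+1); the target altitude is 5.
N_STATES, ACTIONS, TARGET = 11, (-1, 0, 1), 5
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.95, 0.1, 2000

# Q-table: estimated return for every (state, action) pair.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Apply the action, clamp to the valid range, and reward staying on target."""
    next_state = max(0, min(N_STATES - 1, state + ACTIONS[action]))
    reward = 1.0 if next_state == TARGET else -0.1
    return next_state, reward

for _ in range(EPISODES):
    state = random.randrange(N_STATES)
    for _ in range(50):  # fixed-length episode
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Q-learning update: move the estimate toward reward + discounted best future value.
        best_next = max(Q[next_state])
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
        state = next_state

# After training, the greedy policy climbs below the target and descends above it.
policy = [max(range(len(ACTIONS)), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

After enough episodes, the learned policy steers toward the target altitude from either side, which is exactly the trial-and-error behavior described above.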

It is a formal science that studies the problem of learning by trial and error; for example, it is the way we learn to ride a bicycle.

Anonymous

Reinforcement Learning Applied to UAV Drone Technology


Recently, a field of computer science called reinforcement learning has been applied to drone technology. It is used to get a robot or drone to perform certain tasks by rewarding the actions that move it toward them. A team of researchers has just released a first case study applying RL to drone technology. Using reinforcement learning, robots can fly through hoops, explore unknown environments, and maneuver around obstacles. Eventually, drones could perform some very complicated tasks, such as delivering packages in cities or working on construction sites.

Currently, drone technology is limited by its navigation systems. Many drones can follow pre-programmed routes through known environments, and they can hover in one spot while taking pictures or video of an area. Creating more advanced features, however, will require new algorithms and control methods that allow drones to learn tasks and perform them without human assistance. To achieve these capabilities, researchers turned to a field called deep reinforcement learning (DRL), a subfield of machine learning that trains deep neural networks by trial and error in simulation environments. Through trial and error, a simulated drone can figure out how to fly through hoops or explore an unknown environment, much like the robots in sci-fi movies.
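The simulators used for this kind of training typically expose a reset/step interface in the style of OpenAI Gym. The class below is a hypothetical, heavily simplified stand-in for such a simulator (the grid size, goal, obstacle, and rewards are invented for illustration); it only preserves the contract that trial-and-error training relies on:

```python
import random

class ToyDroneSimEnv:
    """A hypothetical grid-world stand-in for a drone flight simulator.

    The drone starts at (0, 0) and must reach a goal cell, while a single
    obstacle cell ends the episode with a penalty. Real research simulators
    model continuous physics, sensors, and wind; this keeps only the
    reset()/step() contract that trial-and-error training relies on.
    """

    SIZE, GOAL, OBSTACLE = 5, (4, 4), (2, 2)
    MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        dx, dy = self.MOVES[action]
        x = max(0, min(self.SIZE - 1, self.pos[0] + dx))
        y = max(0, min(self.SIZE - 1, self.pos[1] + dy))
        self.pos = (x, y)
        if self.pos == self.GOAL:
            return self.pos, 1.0, True      # reached the goal
        if self.pos == self.OBSTACLE:
            return self.pos, -1.0, True     # hit the obstacle
        return self.pos, -0.01, False       # small step cost keeps paths short

# Trial-and-error loop with a random policy, just to show the interface.
env = ToyDroneSimEnv()
for episode in range(3):
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(random.randrange(4))
        total += reward
    print(f"episode {episode}: return {total:.2f}")
```

A learning algorithm such as the Q-learning sketch above would replace the random action choice with its own improving policy.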

While applications of DRL are still in their infancy, a team of researchers from OpenAI and MIT has just published a first successful case study using deep reinforcement learning on UAVs. Using DeepMind’s DQN algorithm, they were able to get a simulated drone to navigate around obstacles. This proof-of-concept experiment suggests that drones could take on many different applications, such as delivering packages or working on construction sites where it is unsafe for humans to operate. The technology could also be used to create flying search-and-rescue robots that can navigate tight environments and aid people trapped in buildings destroyed by an earthquake or a bombing.
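The DQN family of algorithms replaces the tabular Q-values above with a neural network. The snippet below is a minimal sketch of the core DQN update (online network, target network, and temporal-difference loss) on randomly generated transitions; it is not the researchers’ code, and the observation size, action count, and hyperparameters are assumptions made for illustration:

```python
import torch
import torch.nn as nn

# Illustrative sizes: an 8-dimensional observation (e.g. position, velocity,
# nearest-obstacle offsets) and 4 discrete actions. These are assumptions,
# not values from the published experiments.
OBS_DIM, N_ACTIONS, GAMMA = 8, 4, 0.99

def make_q_net():
    # A small multilayer perceptron mapping an observation to one Q-value per action.
    return nn.Sequential(
        nn.Linear(OBS_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )

q_net = make_q_net()
target_net = make_q_net()
target_net.load_state_dict(q_net.state_dict())  # target network starts as a copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# A batch of synthetic transitions standing in for samples from a replay buffer.
batch = 32
obs = torch.randn(batch, OBS_DIM)
actions = torch.randint(0, N_ACTIONS, (batch,))
rewards = torch.randn(batch)
next_obs = torch.randn(batch, OBS_DIM)
dones = torch.zeros(batch)

# Q(s, a) for the actions that were actually taken.
q_values = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)

# Bootstrapped target: r + gamma * max_a' Q_target(s', a'), cut off at episode ends.
with torch.no_grad():
    next_q = target_net(next_obs).max(dim=1).values
    targets = rewards + GAMMA * (1.0 - dones) * next_q

# One gradient step on the temporal-difference error.
loss = nn.functional.mse_loss(q_values, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"TD loss: {loss.item():.4f}")
```

In a full training loop, the synthetic batch would be replaced by transitions sampled from a replay buffer filled by an epsilon-greedy policy interacting with the simulator, and the target network would be refreshed periodically.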

IRL (In Real Life) Use Cases of RL with UAVs

  • Reinforcement learning is used in numerous projects to make sense of data coming from cameras and other onboard sensors. How can in-real-life use cases help us understand its application in the context of UAVs? Read on to find out.
  • The NASA Jet Propulsion Laboratory (JPL) is working on a drone project that’s being supported by a U.S. Office of Naval Research contract. The goal of JPL’s Open Source Robotics (OSR) project is to build drones that help people on Earth by flying where no drones have gone before and doing it autonomously, or without human control.
  • The Open Source Robotics Foundation, a nonprofit corporation based in Virginia, is working on several UAV projects that involve RL. The foundation’s Community Labs program allows members to participate in open source robotic projects of their choosing and offers access to various tools needed for building robots. The organization also provides education materials that can help people learn how to use these tools and how to develop algorithms needed for programming autonomous drones.
  • The Defense Advanced Research Projects Agency (DARPA) has funded numerous projects that involve RL and UAVs. One of its more recent efforts, called Collaborative Operations in Denied Environment (CODE), seeks to develop autonomous unmanned aerial vehicles that can support U.S. military personnel by providing critical real-time intelligence. According to DARPA, CODE will develop collaborative autonomy and computer vision algorithms that allow drones to navigate in areas where GPS is not available or reliable.
  • The U.S. Air Force has also been funding RL research in connection with its Unmanned Aircraft Systems (UAS) programs. For example, its Adaptive Flight Control Technology (AFC Tech) program aims to improve and expand state-of-the-art autonomous flight control systems for UAVs. It is one of several recent Air Force research initiatives focused on autonomy technology development.
  • Another of these efforts, called Optimal Decentralized Search Control (ODS), focuses on developing new technology for UAVs that can allow a single operator to manage multiple aircraft and direct them toward different areas of interest. As explained in an Air Force news release, ODS would enable a single UAV operator to oversee dozens or even hundreds of drones.
  • The Air Force Research Laboratory’s Self-Adaptive and Self-Organizing Systems (SASS) program also aims to develop autonomy technology for UAVs. As explained in an Air Force news release, SASS has taken steps toward creating highly collaborative teams of autonomous systems that can make faster, better decisions than their human counterparts.
  • In addition to such government-funded programs, a number of organizations have formed around RL research and development. One example is OpenAI, a nonprofit based in California that aims to develop artificial general intelligence (AGI) that can be applied to many areas, including commercial drones. As described on its website, OpenAI was co-founded by Sam Altman and Elon Musk, among others, with the stated goal of ensuring that artificial general intelligence benefits all of humanity.

Conclusion


Reinforcement learning techniques for decision making, exploration, and exploitation are now well understood, and researchers have proposed various methods for solving RL problems, including Q-learning, SARSA, and Double Q-learning. In many real-life cases, however, it is difficult to obtain a full observation of all relevant features, which makes standard RL algorithms hard to apply. One proposed approach, aimed at applications such as a reinforcement-learning-based drone serving as a physician assistant, has the UAV’s agent perform both action selection and value-function estimation using only partial information (its observations) in a multi-agent setting, where agents may play roles such as predator, prey, and path finder.
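The methods named above differ mainly in how they bootstrap the value of the next state. The functions below sketch the three tabular update rules; the learning rate, discount factor, and tiny demo table are illustrative assumptions:

```python
import random

ALPHA, GAMMA = 0.1, 0.95

def q_learning_update(Q, s, a, r, s_next):
    # Off-policy: bootstrap from the best action in the next state.
    best_next = max(Q[s_next])
    Q[s][a] += ALPHA * (r + GAMMA * best_next - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next):
    # On-policy: bootstrap from the action the current policy actually takes next.
    Q[s][a] += ALPHA * (r + GAMMA * Q[s_next][a_next] - Q[s][a])

def double_q_update(Q1, Q2, s, a, r, s_next):
    # Double Q-learning: one table picks the next action, the other evaluates it,
    # which reduces the overestimation bias of plain Q-learning.
    if random.random() < 0.5:
        a_star = max(range(len(Q1[s_next])), key=lambda x: Q1[s_next][x])
        Q1[s][a] += ALPHA * (r + GAMMA * Q2[s_next][a_star] - Q1[s][a])
    else:
        a_star = max(range(len(Q2[s_next])), key=lambda x: Q2[s_next][x])
        Q2[s][a] += ALPHA * (r + GAMMA * Q1[s_next][a_star] - Q2[s][a])

# Example: one Q-learning update on a 2-state, 2-action table.
Q = [[0.0, 0.0], [0.0, 0.0]]
q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q)
```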

Training a UAV agent in a multi-agent scenario means training several agents simultaneously, which makes the problem easier by letting them share information while each still learns individually. Training multiple agents with identical methods and shared parameters is much simpler than training each agent independently. Using simulation, it has been shown that the reinforcement learning paradigm can be used to develop flying drones in aerial environments that contain other drones as well as obstacles.
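A common way to realize this kind of shared training is parameter sharing: each agent acts from its own state, but all of their experience updates a single shared value function. The sketch below illustrates the idea with a shared Q-table and a placeholder environment; it is a schematic illustration, not the specific multi-agent model described above:

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
N_STATES, N_ACTIONS, N_AGENTS = 10, 4, 3

# One shared Q-table: every agent reads from it (to act) and writes to it
# (to learn), so experience gathered by any drone benefits all of them.
shared_Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def act(state):
    # Epsilon-greedy action selection against the shared table.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: shared_Q[state][a])

def fake_env_step(state, action):
    # Placeholder dynamics: random next state, reward only in state 0.
    next_state = random.randrange(N_STATES)
    reward = 1.0 if next_state == 0 else 0.0
    return next_state, reward

# Each agent keeps its own state, but all updates land in the shared table.
agent_states = [random.randrange(N_STATES) for _ in range(N_AGENTS)]
for _ in range(1000):
    for i in range(N_AGENTS):
        s = agent_states[i]
        a = act(s)
        s_next, r = fake_env_step(s, a)
        shared_Q[s][a] += ALPHA * (r + GAMMA * max(shared_Q[s_next]) - shared_Q[s][a])
        agent_states[i] = s_next

print(shared_Q[1])  # learned values for one state after pooled training
```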

In the future, these RL-based drones could be used for emergency services, disaster management, and drug delivery during surgery by AI doctors, making them smarter, much like a human brain.
