Space-Based Sensor Tasking Using Deep Reinforcement Learning

Authors

Abstract

To maintain a robust catalog of resident space objects (RSOs), space situational awareness (SSA) mission operators depend on ground- and space-based sensors to repeatedly detect, characterize, and track objects in orbit. Although some sensors are capable of monitoring large swaths of the sky with wide fields of view (FOVs), others—such as maneuverable optical telescopes, narrow-band imaging radars, or satellite laser-ranging systems—are restricted to relatively narrow FOVs and must slew at a finite rate from object to object during observation. Since there are many objects that a narrow-FOV sensor could choose to observe within its field of regard (FOR), it must schedule its pointing direction and duration using an algorithm. This combinatorial optimization problem is known as the sensor-tasking problem. In this paper, we develop a deep reinforcement learning agent to task a narrow-FOV sensor in low Earth orbit (LEO) using proximal policy optimization. The sensor’s performance—both acting alone and as a complement to a network of taskable, ground-based sensors—is compared with that of a greedy scheduler across several figures of merit, including the cumulative number of RSOs observed and the mean trace of the covariance matrix of all observable RSOs in the scenario. The results of the simulations are presented and discussed. Additionally, the performance of an LEO SSA sensor in different orbits is evaluated and discussed, as well as various combinations of sensors.
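
The abstract does not spell out the implementation, but the setup it describes (a tasking agent for a narrow-FOV sensor trained with proximal policy optimization and scored on detection and covariance metrics) can be sketched as below. This is a minimal, hypothetical sketch: the toy environment, its covariance-growth dynamics, the reward terms, and the use of gymnasium with Stable-Baselines3's PPO are all assumptions for illustration, not the authors' code.

```python
# Minimal, hypothetical sketch (not the paper's code): a toy narrow-FOV
# sensor-tasking environment and a PPO agent. Assumes the gymnasium and
# stable-baselines3 packages; all dynamics and rewards are invented.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToySensorTaskingEnv(gym.Env):
    """At each step the agent points the sensor at one of n_rsos candidate RSOs.

    Observation: each RSO's covariance trace and time since last observation.
    Reward: reduction in total covariance trace plus a bonus for first detections.
    """

    def __init__(self, n_rsos=20, horizon=200):
        super().__init__()
        self.n_rsos, self.horizon = n_rsos, horizon
        self.action_space = spaces.Discrete(n_rsos)
        self.observation_space = spaces.Box(
            low=0.0, high=np.inf, shape=(2 * n_rsos,), dtype=np.float32)

    def _obs(self):
        return np.concatenate([self.cov_trace, self.time_since]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.cov_trace = self.np_random.uniform(1.0, 10.0, self.n_rsos)
        self.time_since = np.zeros(self.n_rsos)
        self.seen = np.zeros(self.n_rsos, dtype=bool)
        return self._obs(), {}

    def step(self, action):
        self.t += 1
        self.time_since += 1.0
        self.cov_trace *= 1.01                 # uncertainty grows while unobserved
        before = self.cov_trace.sum()
        self.cov_trace[action] *= 0.5          # observing an RSO shrinks its covariance
        self.time_since[action] = 0.0
        reward = float(before - self.cov_trace.sum()) + (0.0 if self.seen[action] else 5.0)
        self.seen[action] = True
        return self._obs(), reward, self.t >= self.horizon, False, {}


if __name__ == "__main__":
    model = PPO("MlpPolicy", ToySensorTaskingEnv(), verbose=0)
    model.learn(total_timesteps=10_000)
```

A greedy baseline of the kind the abstract compares against could be obtained from the same toy environment by always pointing at the RSO with the largest current covariance trace.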


Similar Articles

Dynamic Sensor Tasking for Space Situational Awareness via Reinforcement Learning

This paper studies the Sensor Management (SM) problem for optical Space Object (SO) tracking. The tasking problem is formulated as a Markov Decision Process (MDP) and solved using Reinforcement Learning (RL). The RL problem is solved using the actor-critic policy gradient approach. The actor provides a policy which is random over actions and given by a parametric probability density function (p...
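
The snippet above describes an actor that outputs a parametric probability density over actions and a critic used in a policy-gradient update. Below is a minimal sketch of that structure, assuming a Gaussian policy over one continuous pointing variable and a one-step temporal-difference update in PyTorch; the dimensions, dummy transition, and hyperparameters are invented for illustration and are not from the paper.

```python
# Hedged sketch of an actor-critic policy gradient with a parametric density:
# the actor outputs the mean and log-std of a Gaussian over one continuous
# pointing variable, the critic estimates state value, and a one-step TD update
# is applied to a dummy transition. Dimensions and numbers are illustrative only.
import torch
import torch.nn as nn

state_dim, action_dim, gamma = 8, 1, 0.99

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 2 * action_dim))
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=3e-4)

def policy(state):
    mean, log_std = actor(state).chunk(2, dim=-1)       # parameters of the action pdf
    return torch.distributions.Normal(mean, log_std.exp())

s, s_next, r = torch.randn(state_dim), torch.randn(state_dim), 1.0   # dummy transition
dist = policy(s)
a = dist.sample()                                        # action drawn from the stochastic policy
with torch.no_grad():
    td_target = r + gamma * critic(s_next)
advantage = (td_target - critic(s)).detach()
actor_loss = -(dist.log_prob(a).sum() * advantage.squeeze())
critic_loss = (td_target - critic(s)).pow(2).mean()
opt.zero_grad(); (actor_loss + critic_loss).backward(); opt.step()
```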


Operation Scheduling of MGs Based on Deep Reinforcement Learning Algorithm

In this paper, the operation scheduling of Microgrids (MGs), including Distributed Energy Resources (DERs) and Energy Storage Systems (ESSs), is proposed using a Deep Reinforcement Learning (DRL)-based approach. Due to the dynamic characteristics of the problem, it is first formulated as a Markov Decision Process (MDP). Next, a Deep Deterministic Policy Gradient (DDPG) algorithm is presented t...
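
As a rough illustration of the DDPG machinery this snippet refers to, the sketch below shows a deterministic actor, a Q-value critic, target networks with soft updates, and one update on a dummy transition. All dimensions, learning rates, and the transition itself are assumptions; the paper's microgrid model and network architecture are not reproduced here.

```python
# Hedged illustration of DDPG building blocks: a deterministic actor, a Q-value
# critic, target networks, and one update on a dummy transition. Sizes, rates,
# and the transition are invented for illustration.
import copy
import torch
import torch.nn as nn

state_dim, action_dim, tau, gamma = 6, 2, 0.005, 0.99

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# One update on a dummy transition (s, a, r, s2).
s, a = torch.randn(1, state_dim), torch.rand(1, action_dim)
r, s2 = torch.tensor([[1.0]]), torch.randn(1, state_dim)
with torch.no_grad():
    y = r + gamma * critic_t(torch.cat([s2, actor_t(s2)], dim=-1))   # bootstrapped target
critic_loss = (critic(torch.cat([s, a], dim=-1)) - y).pow(2).mean()
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

actor_loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()        # deterministic policy gradient
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

for net, tgt in ((actor, actor_t), (critic, critic_t)):              # soft target update
    for p, p_t in zip(net.parameters(), tgt.parameters()):
        p_t.data.mul_(1 - tau).add_(tau * p.data)
```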


Space-Based Antenna Morphing using Reinforcement Learning

Shape Memory Alloys (SMAs) have been employed to enhance structural properties and increase the ability of structures to adapt and conform as desired. Morphing technology has also proven beneficial to space hardware deployment, in addition to satellite antenna design. In this research, Reinforcement Learning is utilized with an antenna model to demonstrate that antenna elements equipped with S...


Deep Reinforcement Learning in Parameterized Action Space

Recent work has shown that deep neural networks are capable of approximating both value functions and policies in reinforcement learning domains featuring continuous state and action spaces. However, to the best of our knowledge no previous work has succeeded at using deep neural networks in structured (parameterized) continuous action spaces. To fill this gap, this paper focuses on learning wi...
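
For readers unfamiliar with the term, a structured (parameterized) action space pairs a discrete action choice with continuous parameters belonging to that choice. The sketch below expresses such a space using gymnasium spaces; the example actions and their ranges are assumptions loosely patterned on the robot-soccer domain often used for this problem, not details taken from the snippet.

```python
# Illustrative sketch of a structured (parameterized) action space: a discrete
# action choice, each option carrying its own continuous parameters. Action
# names and ranges are assumptions, not details from the paper.
import numpy as np
from gymnasium import spaces

parameterized_action_space = spaces.Tuple((
    spaces.Discrete(3),                                            # 0: Dash, 1: Turn, 2: Kick
    spaces.Box(low=np.array([0.0, -180.0], dtype=np.float32),
               high=np.array([100.0, 180.0], dtype=np.float32)),   # Dash(power, direction)
    spaces.Box(low=np.array([-180.0], dtype=np.float32),
               high=np.array([180.0], dtype=np.float32)),          # Turn(direction)
    spaces.Box(low=np.array([0.0, -180.0], dtype=np.float32),
               high=np.array([100.0, 180.0], dtype=np.float32)),   # Kick(power, direction)
))

# A policy over this space outputs the discrete choice together with values for
# all parameter sets; only the parameters of the chosen action are executed.
choice, dash_params, turn_params, kick_params = parameterized_action_space.sample()
```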


Vision-based Deep Reinforcement Learning

Recently, Google DeepMind showcased how deep learning can be used in conjunction with existing Reinforcement Learning (RL) techniques to play Atari games [11], beat a world-class player [14] in the game of Go, and solve complicated riddles [3]. Deep learning has been shown to be successful in extracting useful, nonlinear features from high-dimensional media such as images, text, video and audio [...



Journal

Journal title: Journal of The Astronautical Sciences

Year: 2022

ISSN: 2195-0571, 0021-9142

DOI: https://doi.org/10.1007/s40295-022-00354-8