Metadata-Version: 2.1
Name: UtilsRL
Version: 0.4.5
Summary: A Python module designed for RL logging, monitoring and experiment management.
Home-page: https://github.com/typoverflow/UtilsRL
Author: typoverflow
Author-email: typoverflow@outlook.com
License: MIT
Platform: UNKNOWN
Requires-Python: >=3.7
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: gym (>=0.19)
Requires-Dist: tqdm
Requires-Dist: numpy
Requires-Dist: tensorboard
Requires-Dist: torch
Requires-Dist: pandas

# UtilsRL

`UtilsRL` is a Python utility package for reinforcement learning, designed for fast integration into other RL projects. Despite being lightweight, it provides a full set of utilities for RL algorithm development.

Currently `UtilsRL` is maintained by researchers from the [LAMDA-RL](https://github.com/LAMDA-RL) group. Bug reports, feature requests, and improvements are all appreciated.

## Installation
You can install this package directly from PyPI:
```shell
pip install UtilsRL
```
After installation, you may still need to configure some other dependencies based on your platform, such as PyTorch.
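To confirm the installation succeeded, a quick sanity check is to look the package up with the standard library, which avoids assuming anything about `UtilsRL`'s internal API:

```python
# Sanity check: verify the UtilsRL package is discoverable after `pip install`.
# Uses only the standard library, so it runs even if installation failed.
import importlib.util

spec = importlib.util.find_spec("UtilsRL")
if spec is None:
    print("UtilsRL is not installed")
else:
    print("UtilsRL found at", spec.origin)
```

If the package is reported as missing, re-run `pip install UtilsRL` in the same environment (virtualenv or conda) that your Python interpreter uses.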

## Features & Usage
See [the documentation](https://utilsrl.readthedocs.io) for details.

## Acknowledgements
We took inspiration for module design from [tianshou](https://github.com/thu-ml/tianshou) and [Polixir OfflineRL](https://github.com/polixir/OfflineRL).

We also thank [@YuRuiii](https://github.com/YuRuiii), [@cmj2020](https://github.com/cmj2002), [@paperplane03](https://github.com/paperplane03) and [@momanto](https://github.com/momanto) for their participation in code testing and performance benchmarking. 


