add results figure
This commit is contained in:
parent e6f9ace2e3
commit 70c5a6988e
2 changed files with 2 additions and 0 deletions
@@ -19,6 +19,8 @@ To pre-process the Atari-HEAD data run [Preprocess_AtariHEAD.ipynb](Preprocess_AtariHEAD.ipynb)
## Intention-based Hierarchical RL Agent
The Int-HRL agent is based on the hierarchically guided imitation learning method (hg-DAgger/Q); we adapted code from [https://github.com/hoangminhle/hierarchical_IL_RL](https://github.com/hoangminhle/hierarchical_IL_RL). <br>
Due to the novel sub-goal extraction pipeline, our agent does not require experts during training and is more than three times as sample efficient as hg-DAgger/Q. <br>
To run the full agent with 12 separate low-level agents for sub-goal execution, run [agent/run_experiment.py](agent/run_experiment.py); for a single agent (one low-level agent for all sub-goals), run [agent/single_agent_experiment.py](agent/single_agent_experiment.py).
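As a sketch, the two entry points above can be dispatched from a small shell wrapper. The wrapper itself and the `MODE` argument are illustrative assumptions; only the two script paths come from the repository.

```shell
#!/bin/sh
# Hypothetical dispatcher for the two training modes described in the README.
# Only the script paths are from the repo; MODE and this wrapper are assumptions.
MODE="${1:-full}"
if [ "$MODE" = "full" ]; then
    # Full agent: 12 separate low-level agents for sub-goal execution.
    script="agent/run_experiment.py"
else
    # Single agent: one low-level agent handles all sub-goals.
    script="agent/single_agent_experiment.py"
fi
echo "python $script"
```

Invoked with no argument it defaults to the full 12-agent setup; `./run.sh single` would select the single-agent experiment instead.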
BIN
supplementary/results_updated.pdf
Normal file
Binary file not shown.