Development and Deadlines
May 16th: Beginning of the Project
May 20th: Title and Abstract
June 12th: Workload 1 - Basics of RL
1. Steve's RL Courses
2. Complete three small projects with OpenAI Gym
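The small Gym projects above all follow the same reset/step interaction loop. Below is a minimal sketch of that loop using a hypothetical `ToyEnv` stand-in (not a real Gym environment) so it runs without dependencies; a real project would call `gym.make(...)` instead.

```python
import random

class ToyEnv:
    """Hypothetical stand-in for a Gym environment: episode ends after 10 steps."""
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # initial observation

    def step(self, action):
        self.t += 1
        done = self.t >= 10
        return float(self.t), 1.0, done, {}  # obs, reward, done, info

def run_episode(env):
    """Classic Gym interaction loop: reset, then step until done."""
    obs, total_reward, done = env.reset(), 0.0, False
    while not done:
        action = random.choice([0, 1])  # random policy for illustration
        obs, reward, done, info = env.step(action)
        total_reward += reward
    return total_reward

print(run_episode(ToyEnv()))  # → 10.0
```

The same `run_episode` works unchanged for CartPole if `ToyEnv()` is replaced with a real environment exposing the classic `reset()`/`step()` API.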
June 26th: Workload 2 - DQN
1. Collect readings on generalization in RL and the DQN algorithm
2. Project I: DQN Cartpole
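The core of the DQN Cartpole project is the Bellman target used to regress the Q-network. A small NumPy sketch of the target computation (batch form, with the bootstrap term zeroed on terminal transitions; `dqn_targets` is an illustrative name, not from the project code):

```python
import numpy as np

def dqn_targets(rewards, next_q, dones, gamma=0.99):
    """Bellman targets for a batch: y = r + gamma * max_a' Q(s', a'),
    where the bootstrap term is dropped when the transition is terminal."""
    return rewards + gamma * (1.0 - dones) * next_q.max(axis=1)

rewards = np.array([1.0, 0.0])
next_q  = np.array([[0.5, 2.0],    # max = 2.0
                    [1.0, 0.0]])   # max = 1.0
dones   = np.array([0.0, 1.0])     # second transition is terminal
print(dqn_targets(rewards, next_q, dones))  # targets: [2.98, 0.0]
```

In the full algorithm these targets come from a separate, periodically synced target network and are fit with a TD loss over minibatches from a replay buffer.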
July 8th: Workload 3 - Atari
1. Pytorch Tutorials (part 1)
2. Introduction to Dockerfiles and GPU usage
3. Project II: DQN Atari
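Moving DQN to Atari requires frame preprocessing before the observations reach the network. A hedged sketch of the usual pipeline (grayscale, downsample, stack the last four frames); the naive 2x subsampling here is an assumption for illustration, whereas the original DQN paper resizes to 84x84:

```python
import numpy as np

def preprocess(frame):
    """Atari frame (210, 160, 3) RGB -> grayscale, 2x downsampled, scaled to [0, 1]."""
    gray = frame.mean(axis=2)                    # crude luminance
    return gray[::2, ::2].astype(np.float32) / 255.0

def stack_frames(frames):
    """Stack the last 4 preprocessed frames into one (4, 105, 80) state,
    so the network can infer motion (e.g. ball velocity in Breakout)."""
    return np.stack(frames[-4:], axis=0)

frames = [preprocess(np.zeros((210, 160, 3), dtype=np.uint8)) for _ in range(4)]
state = stack_frames(frames)
print(state.shape)  # → (4, 105, 80)
```

In practice these steps would live in a Gym wrapper, as planned in Workload 8.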
July 17th: Workload 4 - Docker
1. Pytorch Tutorials (part 2)
2. Watch video on Meta Reinforcement Learning
3. Project II: DQN Atari (testing with CPU)
4. Learning Docker
July 25th: Workload 5 - Doodad
1. Update repositories
2. Test Atari experiments with doodad locally
3. Create lab account and Docker image for rlkit:latest
July 31st: Workload 6 - Setup for Experiments
1. Tutorials about Generalization in RL
2. Write Rough Draft for Presentation and Report
3. Create an Ubuntu virtual machine
4. Test Atari experiments with doodad and Docker, both locally and over SSH
5. Use the lab's GPU to run experiments
August 16th: Workload 7 - RL Experiments
1. Fix the GPU Issue for the Atari Experiments
2. Zero-shot meta-learning reading and discussion of future plans
3. Integrate comet_ml for experiment logging
4. Testing on Breakout-v0
5. Report - Preliminary Version
August 25th: Workload 8 - Zero-Shot Meta Learning
1. Preprocessing and Neural Networks for the Policy
2. Atari Wrapper for a set of Atari environments
3. Read the paper on offline RL with IQL (Implicit Q-Learning)
4. Implementation of IQL
5. Epsilon-Greedy Algorithm Implementation
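The epsilon-greedy implementation in item 5 reduces to a few lines: explore uniformly with probability epsilon, otherwise act greedily on the Q-values. A minimal sketch (function name is illustrative, not from the project code):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a uniformly random action;
    otherwise pick the greedy (argmax-Q) action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# epsilon = 0 is purely greedy:
print(epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0))  # → 1
```

During training, epsilon is typically annealed from 1.0 toward a small floor (e.g. 0.1 or 0.01) over the first million frames.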
August 30th: Workload 9 - Final Steps
1. Write the report
2. Prepare Presentation
3. Clean the code and add comments
August 30th: Oral Presentation at 3 pm
August 30th: End of the Project