# TEmporal Action Compositions for 3D Humans
To compose a sequence of actions interactively, provide the text prompts and their durations (in seconds):

```shell
python interact_teach.py folder=/path/to/experiment output=/path/to/sample.npy texts='[step on the left, look right, wave with right hand]' durs='[1.5, 1.5, 1.5]'
```
## Environment
Create the environment:

`python3.9 -m venv ~/.venvs/teach`

Activate it:

`source ~/.venvs/teach/bin/activate`

Make sure `pip` and `setuptools` are up to date:

`pip install --upgrade pip setuptools`

Install the packages:

`pip install -r requirements.txt`

Download the DistilBERT model used by the text encoder:

```shell
cd deps/
git lfs install
git clone https://huggingface.co/distilbert-base-uncased
cd ..
```
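
To sanity-check the download, you can load the local checkpoint with `transformers` (a minimal sketch, assuming the `transformers` package is available from `requirements.txt`):

```python
from transformers import AutoModel, AutoTokenizer

# Load the locally cloned checkpoint instead of fetching from the Hub.
tokenizer = AutoTokenizer.from_pretrained('deps/distilbert-base-uncased')
model = AutoModel.from_pretrained('deps/distilbert-base-uncased')

# Encode an example prompt; distilbert-base-uncased has hidden size 768.
tokens = tokenizer('wave with right hand', return_tensors='pt')
print(model(**tokens).last_hidden_state.shape)  # (1, num_tokens, 768)
```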
The code has been tested with CUDA 10.2 and PyTorch 1.11.0.
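
A quick way to confirm your environment matches (a generic check, nothing repo-specific):

```python
import torch

# The repo is tested with PyTorch 1.11.0 built against CUDA 10.2.
print(torch.__version__)          # expect 1.11.0
print(torch.version.cuda)         # expect 10.2
print(torch.cuda.is_available())  # True if a GPU is visible
```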

## Data
Download the data from the [AMASS website](https://amass.is.tue.mpg.de) and process it:

```shell
python divotion/dataset/process_amass.py --input-path /path/to/amass --output-path /out/path --model-type smplh --use-betas
```
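
If you want to peek at a raw AMASS sequence before processing, the `.npz` files follow the standard AMASS layout (a sketch; the file path below is just an example):

```python
import numpy as np

# Standard fields of an AMASS SMPL-H sequence file.
seq = np.load('/path/to/amass/CMU/01/01_01_poses.npz')
print(seq['poses'].shape)      # (num_frames, 156) axis-angle SMPL-H pose
print(seq['trans'].shape)      # (num_frames, 3) global root translation
print(seq['betas'].shape)      # body shape coefficients
print(seq['mocap_framerate'])  # capture framerate (the script resamples to 30 fps)
```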

Download the data from the [BABEL website](https://babel.is.tue.mpg.de) (or get it from me) and align its labels with the processed AMASS data:

```shell
python divotion/dataset/add_babel_labels.py --input-path /is/cluster/nathanasiou/data/amass/processed_amass_smplh_wshape_30fps --out-path /is/cluster/nathanasiou/data/babel/babel-smplh30fps-gender --babel-path /is/cluster/nathanasiou/data/babel/babel_v2.1/
```
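
To see what the BABEL annotations look like, here is a sketch assuming the standard BABEL release layout with per-split JSON files such as `train.json`:

```python
import json

# Each entry maps a BABEL sequence id to its AMASS path and text labels.
with open('/path/to/babel_v2.1/train.json') as f:
    babel = json.load(f)

seq = next(iter(babel.values()))
print(seq['feat_p'])  # relative path of the underlying AMASS sequence
if seq['frame_ann'] is not None:  # frame-level segments may be absent
    for segment in seq['frame_ann']['labels']:
        print(segment['start_t'], segment['end_t'], segment['raw_label'])
```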

Softlink the data or copy it depending on where you store it. You should have a data folder with the following structure:
```
|-- amass
|   |-- processed_amass_smplh_wshape_30fps
|-- babel
|   |-- babel-smplh30fps-gender
|   |-- babel_v2.1
|-- smpl_models
|   |-- markers_mosh
|   |-- README.md
|   |-- smpl
|   |-- smplh
|   `-- smplx
```
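
To confirm the layout before training, here is a quick sanity check (a hypothetical snippet, not part of the repo):

```python
from pathlib import Path

# Check that every expected subfolder exists under the data root.
data = Path('data')
for sub in ('amass/processed_amass_smplh_wshape_30fps',
            'babel/babel-smplh30fps-gender',
            'babel/babel_v2.1',
            'smpl_models/smplh'):
    status = 'ok' if (data / sub).is_dir() else 'MISSING'
    print(f'{status:7s} {sub}')
```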

Be careful not to push any data! To softlink your data, do:

`ln -s /is/cluster/nathanasiou/data`

## Training
To start training after activating your environment, run:
`python train.py experiment=baseline logger=none`

Explore `configs/train.yaml` to change basic settings, such as where your output is stored or which data to use, e.g. for a small experiment on a subset of the data.

## Generate Samples

For sampling, run:

`python sample_seq.py folder=/path/to/experiment align=full slerp_ws=8`

In general it is: `<output_folder>/<project>/<dataname_config>/<experiment>/<run_id>`.

The folder should point to the output folder you've chosen in `train.yaml` for out-of-the-box sampling. This will save joint positions in `.npy` files.
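
To inspect a generated sample (a sketch; the exact array layout is an assumption, joint positions are typically `(num_frames, num_joints, 3)`):

```python
import numpy as np

# Load the joint positions written during sampling.
joints = np.load('/path/to/sample.npy')
print(joints.shape)  # e.g. (num_frames, num_joints, 3)
```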

## Evaluate
After sampling, to get the evaluation numbers run:

`python eval.py folder=/is/cluster/work/nathanasiou/experiments/teach/babel-amass/babel-full/rot-5secs-full-sched-lr/ number_of_samples=3 fact=1.1`

You only need to point to the experiment folder!

To submit a single experiment to the cluster:
`python cluster/single_run.py --expname babel-default --run-id first-try-full --extras data=babel-amass`

The experiment folder structure follows the settings in `train.yaml`.

## Blender Rendering
An example of rendering with Blender (BABEL):

`blender --background --python render_video.py -- folder=/is/cluster/nathanasiou/experimentals/teach/babel-amass/baseline/o234tnul/samples_skin/test/CMU/CMU/28/28_11_poses.npz-8593_allground_objs fast=false high_res=true`

An example of rendering with Blender (KIT):

`blender --background --python render_video.py -- folder=/is/cluster/nathanasiou/experimentals/teach/kit-mmm-xyz/baseline/2awlfcm9/samples/test fast=false high_res=true`


### Global configurations shared between different modules

- `experiment`: the overall experiment name
- `run_id`: specific info about the current run (the wandb run name)
## Single-Experiment Training on the Cluster

`python cluster/single_run.py --expname babel-default --run-id debugging --extras data=babel-amass data.batch_size=123 --mode train`

## Sampling on the Cluster

`python cluster/single_run.py --folder folder/to/experiment --mode sample`

## Fast Rendering of Results

For fast rendering (less than 30 seconds per video):
`python render_video_fast.py dim=2 folder=/ps/scratch/nathanasiou/oracle_experiments/kit-temos-version/1v6vh9w2/path/to/npys/files`

Check the `durs` key in the configuration to add durations: `durs='[<dur1_in_secs>, <dur2_in_secs>, ...]'`. The video outputs will be saved in the absolute output directory.

Your experiments will always be structured like this (see `train.yaml`):
`<project>/<dataname>/<experiment>/<run_id>`
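
For scripting, the run folder can be composed directly (illustrative only; the `experiments` root and the names below are just examples taken from this README):

```python
from pathlib import Path

# Compose a run folder from the naming scheme above.
project, dataname, experiment, run_id = 'teach', 'babel-amass', 'baseline', 'first-try-full'
run_folder = Path('experiments') / project / dataname / experiment / run_id
print(run_folder)  # experiments/teach/babel-amass/baseline/first-try-full
```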


## Blender Setup

Install `pip` and `moviepy` into Blender's bundled Python (located at e.g. `/is/cluster/work/nathanasiou/blender/blender-3.1.2-linux-x64/3.1/python/bin`):

`./python3.10 -m ensurepip`

`./python3.10 -m pip install moviepy`

Then render with `blender` (or `cluster_blender`):

`blender --background --python render.py -- npy=path/to/file.npy`
Or submit the rendering to the cluster:

`python cluster/single_run.py --folder /some/folder/path/or/file --mode render --bid 100`