Organizing my research

Productivity and organization

This week I started organizing my research notes and tasks in my to-do app (I use TickTick). Using an app has been, from the start, a fundamental way to never lose a new idea when it comes to me. However, after a few months, I already have a huge list of things to do and no simple way of seeing how to get through it all.

The truth is that research is like a hydra: the more you do, the more there is to be done. New ideas surge and resurge, and there is indeed a lot I have done, am doing, and wish to do. Moreover, I have no control over the papers I am reading or have already read. I sometimes write the main notes inside a task or in a separate note, but it was becoming really confusing to track everything down.

It is still confusing to parse the huge list of nested tasks, but I see a path now. I've separated my tasks into a Research Backlog and a Research Board: the backlog is a repository for all the ideas, tasks and things that come to mind, while the board is what I plan to do this week or in the near future, organized as a simple kanban with three columns: tasks, active and completed. With this division, I can at least have a sense of what to do each week without having to think so much about the long run.

On the other hand, planning on a larger time frame is definitely important. For now, however, ideas still lie in a sort of data lake, which I will organize slowly. As I said, controlling the research direction in the longer term is difficult, and I am still developing the expertise to do so, but I believe these boards, together with the journaling on this website, will help me.

Accomplishments

This past week has been full of stuff to do, and I was getting quite nervous because of that: interviews, driving tests, bureaucracy, research, thoughts. It’s been hard to concentrate, but Japan definitely helps, I believe.

Research-wise, I was able to rework the augmentation process for ROSA-GDS, which now uses only two augmentations: a weak one and a strong one based on the RandAugment class from torchvision. This process seems to make more sense, but without proper metrics it is hard to verify.
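
As a rough illustration (not the actual ROSA-GDS code), a weak/strong pair built on torchvision could look something like this; the image size and the RandAugment settings are assumptions:

```python
# Rough sketch of a weak/strong augmentation pair using torchvision
# (illustrative only, not the actual ROSA-GDS code; the crop size and
# RandAugment settings are assumptions).
from torchvision import transforms
from torchvision.transforms import RandAugment

weak = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

strong = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    RandAugment(num_ops=2, magnitude=9),  # torchvision defaults, written explicitly
    transforms.ToTensor(),
])
```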

On that front, I was able to rebuild the interpreter evaluation system, which now works very well with cudf.DataFrame. I then adapted the Insertion, Deletion, Overall, Consistency and Minimal Size metrics to run by default, and added Random and Center interpreters to the measurements.
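
Conceptually, the results end up in a table that can be summarized per interpreter and metric; here is a hedged sketch of that aggregation with cudf (column names and values are placeholders, not my actual schema or numbers):

```python
# Hedged sketch: aggregating per-sample metric results per interpreter with cudf.
# Column names and values are placeholders, not the real schema or results.
import cudf

results = cudf.DataFrame({
    "interpreter": ["Random", "Random", "Center", "Center"],
    "metric": ["Insertion", "Deletion", "Insertion", "Deletion"],
    "value": [0.0, 0.0, 0.0, 0.0],
})

# One summary row per (interpreter, metric): the mean over the dataset.
summary = results.groupby(["interpreter", "metric"])["value"].mean().reset_index()
print(summary)
```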

One other thing I have yet to do is make visualization and evaluation tracking optional, since it is unnecessary to save so much data when my goal is just to capture the evaluations over the whole dataset. However, together with this, I think I should also add the option to save only a portion of the data, because I believe I will soon need to track the visualizations during training, and I certainly cannot do that for the whole dataset (maybe this is overengineering, but I still have to think it through).
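
A minimal sketch of the idea, with hypothetical names (this is not the actual codebase, just how the two options could fit together):

```python
# Minimal sketch (hypothetical names): make tracking optional and only save
# visualizations for a subset of the samples.
from dataclasses import dataclass

@dataclass
class TrackingConfig:
    track_evaluations: bool = True      # keep dataset-level evaluation numbers
    track_visualizations: bool = False  # skip per-sample visualizations by default
    sample_fraction: float = 0.01       # when tracking, save only this fraction

    def should_save_sample(self, index: int) -> bool:
        """Deterministically keep roughly `sample_fraction` of the samples."""
        if not self.track_visualizations:
            return False
        stride = max(1, round(1.0 / self.sample_fraction))
        return index % stride == 0
```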

Finally, the ResNet ImageNet experiments finished (after 10 days for ResNet-50), and I was able to reach 74% test accuracy in 30 epochs or so. This is fine for now, so I am going to start looking for a counterpart ImageNet recipe for ViT. One thing that might become a problem is that my ResNet recipes were made to run on single-GPU systems, so there is still work to be done to adapt them to multi-GPU; maybe a simple batch-size adapter callback will do the job, although there will be no speedup that way. With that in mind, I am running the current ViT baseline on 8 GPUs by default and using the TokenHalting technique I developed (I also need to write about that somewhere…). Hopefully it stops crashing and the results are satisfactory for a first run (currently around 50% validation accuracy after 10 epochs).
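
To make the batch-size adapter idea a bit more concrete, here is a hedged sketch of one way it could work, assuming the goal is to preserve the recipe's effective (global) batch size when the work is split across GPUs; the function name and the DDP-style setup are assumptions, not the final implementation:

```python
# Hedged sketch of a batch-size adapter: split the single-GPU recipe's batch
# size across GPUs so the effective (global) batch size stays the same.
# The function name and the DDP-style assumption are illustrative only.
import torch


def per_device_batch_size(global_batch_size: int, world_size: int | None = None) -> int:
    """Return the per-GPU batch size that preserves the recipe's global batch size."""
    if world_size is None:
        world_size = max(1, torch.cuda.device_count())
    if global_batch_size % world_size != 0:
        raise ValueError("global batch size must be divisible by the number of GPUs")
    return global_batch_size // world_size


# Example: a 256-image single-GPU recipe becomes 32 images per GPU on 8 GPUs.
print(per_device_batch_size(256, world_size=8))  # -> 32
```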

For the ViT, I want to try token halting, token dropout and similar approaches, in both their fixed and progressive versions, which are likely to yield a fast yet robust ImageNet recipe.
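
As a toy illustration of the general idea (not my TokenHalting implementation), fixed versus progressive token dropout could look roughly like this:

```python
# Toy illustration of fixed vs. progressive token dropout for a ViT-style
# model; this is not the TokenHalting implementation, just the general idea.
import torch


def drop_tokens(tokens: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Randomly keep a fraction of the patch tokens.

    tokens: (batch, num_tokens, dim); the class token is assumed to be handled
    separately and excluded from dropping.
    """
    batch, num_tokens, dim = tokens.shape
    num_keep = max(1, int(num_tokens * keep_ratio))
    scores = torch.rand(batch, num_tokens, device=tokens.device)
    keep_idx = scores.topk(num_keep, dim=1).indices
    return torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, dim))


def progressive_keep_ratio(epoch: int, total_epochs: int,
                           start: float = 1.0, end: float = 0.5) -> float:
    """Linearly anneal the keep ratio over training (the "progressive" variant);
    the "fixed" variant simply uses a constant ratio throughout."""
    t = min(1.0, epoch / max(1, total_epochs - 1))
    return start + t * (end - start)
```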

All in all, it was a productive week, and I am keeping a positive attitude towards the weeks to come. 😊