Description
EP18 - I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization
2023-11-20 03:06
EP17 - Rethinking Attention: Exploring Shallow Feed-Forward Neural Networks as an Alternative to Attention Layers in Transformers
2023-11-20 03:03
EP16 - Emu Video: Factorizing Text-to-Video Generation by Explicit Image Conditioning
2023-11-20 03:08
EP15 - SelfEval: Leveraging the discriminative nature of generative models for evaluation
2023-11-20 03:04
EP14 - Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections
2023-11-20 03:17
EP13 - Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2
2023-11-20 02:34
EP12 - MetaDreamer: Efficient Text-to-3D Creation With Disentangling Geometry and Texture
2023-11-20 02:50
EP11 - Testing Language Model Agents Safely in the Wild
2023-11-20 02:17
EP10 - Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
EP9 - UnifiedVisionGPT: Streamlining Vision-Oriented AI through Generalized Multimodal Framework
2023-11-20 02:57
EP8 - UFOGen: You Forward Once Large Scale Text-to-Image Generation via Diffusion GANs
2023-11-19 03:02
EP7 - Adaptive Shells for Efficient Neural Radiance Field Rendering
2023-11-19 03:05
EP6 - Contrastive Chain-of-Thought Prompting
2023-11-19 02:58
EP5 - JaxMARL: Multi-Agent RL Environments in JAX
2023-11-19 02:21
EP4 - Tied-LoRA: Enhancing parameter efficiency of LoRA with weight tying
2023-11-19 03:49
EP3 - Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives
2023-11-19 02:40
EP2 - The Chosen One: Consistent Characters in Text-to-Image Diffusion Models
2023-11-19 02:25
EP1 - ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks
2023-11-19 02:17