Tag: discrete_models
1 item with this tag.
Jun 18, 2024
Controllable Image Generation With Composed Parallel Token Prediction
Tags: diffusion_model, gan, vq-vae, vq-gan, analysis, image_generation, compositionality, discrete_models, parallel_token_prediction, controllable_generation