HiDream I1 Full
Open-source image generative foundation model with 17B parameters that achieves state-of-the-art image generation quality within seconds
Developer: HiDream AI · License: MIT
HiDream I1 is an open-source image generation foundation model with 17 billion parameters that achieves state-of-the-art quality at rapid generation speeds. Released in May 2025, the model delivers industry-leading prompt adherence while remaining versatile across diverse styles, from photorealistic imagery to cartoon and other artistic renderings.
The model employs a sparse diffusion transformer architecture, detailed in the technical paper "HiDream-I1: A High-Efficient Image Generative Foundation Model with Sparse Diffusion Transformer." The system integrates multiple components, pairing text encoders with a sparse transformer backbone, to balance prompt understanding with inference efficiency.
The sparse transformer design enables the model to generate high-quality images within seconds while maintaining competitive computational requirements.
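For local experimentation, the following is a minimal sketch of loading the released weights through Hugging Face diffusers. The repository id, dtype, and sampling settings are assumptions, and the model also relies on auxiliary text encoders, so consult the official model card for the exact loading procedure.

```python
# Minimal sketch: text-to-image with the full HiDream I1 checkpoint via diffusers.
# Repo id, dtype, and sampling values are assumptions; check the official model
# card for the exact pipeline class and any required auxiliary text encoders.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Full",   # assumed Hugging Face repo id
    torch_dtype=torch.bfloat16,     # reduced precision to fit the 17B model on one GPU
)
pipe.to("cuda")

image = pipe(
    prompt="a photorealistic portrait of an astronaut in a sunflower field",
    num_inference_steps=50,         # non-distilled variant: standard step count (assumed)
    guidance_scale=5.0,             # assumed default; tune per the model card
).images[0]
image.save("hidream_i1_full.png")
```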
HiDream I1 demonstrates exceptional results across multiple evaluation frameworks:
GenEval Results (Overall Score: 0.83): Achieved the highest composite score among evaluated models, with perfect single object generation (1.00) and near-perfect two-object scenarios (0.98). Strong performance in color attribution (0.72) and counting accuracy (0.79).
DPG-Bench (Overall: 85.89): Leads in relation comprehension (93.74) and miscellaneous categories (91.83), demonstrating sophisticated understanding of object relationships and complex scene composition.
HPSv2.1 Benchmark (Average: 33.82): Surpasses leading competitors including Flux.1-dev (32.47) and DALL-E 3 (31.44) in human preference alignment, with particularly strong performance in the animation style (35.05).
These benchmark strengths carry over into practical versatility: HiDream I1 Full supports a wide range of image generation applications, from photorealistic scenes to cartoon and artistic renderings.
The model is available in three variants: full, dev (distilled), and fast (distilled), allowing users to select the appropriate balance between quality and computational efficiency for their specific use cases. The full variant provides maximum quality, while distilled versions offer accelerated inference for time-sensitive applications.
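As a rough illustration of that trade-off, the sketch below maps each variant to plausible sampling settings. The repository ids, step counts, and guidance values are illustrative assumptions rather than official defaults; distilled variants are generally run with fewer denoising steps.

```python
# Hypothetical mapping of HiDream I1 variants to sampling settings.
# Repo ids, step counts, and guidance values are illustrative assumptions,
# not official defaults; verify them against the model documentation.
VARIANTS = {
    "full": {"repo": "HiDream-ai/HiDream-I1-Full", "steps": 50, "guidance": 5.0},
    "dev":  {"repo": "HiDream-ai/HiDream-I1-Dev",  "steps": 28, "guidance": 0.0},
    "fast": {"repo": "HiDream-ai/HiDream-I1-Fast", "steps": 16, "guidance": 0.0},
}

def settings_for(variant: str) -> dict:
    """Return the assumed repo id and sampling settings for a variant name."""
    return VARIANTS[variant]
```

Keeping the variant choice in a single lookup like this makes it easy to switch between maximum quality (full) and faster distilled inference without touching the rest of the generation code.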
1. Choose a model and click 'Deploy' above to find available GPUs recommended for this model.
2. Rent your dedicated instance preconfigured with the model you've selected.
3. Start sending requests to your model instance and getting responses right now.
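Once the instance is running, requests can be sent over HTTP. The snippet below is a hypothetical example: the endpoint path, authentication header, and response schema are placeholders, so substitute the values shown in your instance's API documentation.

```python
# Hypothetical request to a deployed HiDream I1 instance. The endpoint URL,
# auth header, and payload/response fields are placeholders: replace them with
# the values from your instance's dashboard / API documentation.
import base64
import requests

resp = requests.post(
    "https://<your-instance-host>/generate",             # placeholder endpoint
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},   # placeholder credential
    json={
        "prompt": "a watercolor painting of a lighthouse at dawn",
        "num_inference_steps": 50,
        "width": 1024,
        "height": 1024,
    },
    timeout=300,
)
resp.raise_for_status()
with open("output.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["image"]))       # assumed base64-encoded image field
```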