Type a rough description of a scene and the tool generates a short video matching that text. Google's Imagen Video and Nvidia's Align Your Latents are similar concepts.
Graeme Fulton explores how photo stock services handle generated images. New products like Prompt Hero & GhostlyStock are emerging, while established companies are taking smaller steps: Shutterstock has partnered with DALL-E 2, and Adobe Stock has opened submissions with limitations (you can't copy an existing style). There's another review by Alina Valyaeva.
Matthew Ström generates Open Graph images for his blog.
This experiment shows how words like "assertive" and "gentle" are mapped to stereotypes and biases in models like Stable Diffusion and DALL-E 2 (review). Bloomberg has a good long-read article about this problem.
This prompt-based generator can refine photos and images from a text description.
Aaron Hertzmann draws interesting parallels between today's boom in algorithm-driven design tools and other branches of arts and culture over past centuries. He argues that the current state is just an interim stage and offers spot-on analogies.
This user interface tool initially launched in 2017, but its makers have announced a new version focused on algorithm-driven design. There's no preview yet.
Keima Kai looks at a typical website creation process and thinks about how each stage could be improved with algorithm-driven design.
Brilliant thinking by Alexander Wales on how tools like DALL-E and Midjourney affect professional illustrators and artists. It will certainly kick the economics of these professions in the stomach, but art as self-expression will remain. Erik Hoel has a similar take.