The idea of fully replacing a designer with an algorithm sounds futuristic, but it misses the point. Product designers translate a raw product idea into a well-thought-out user interface, with solid interaction principles, a sound information architecture, and a coherent visual style, while helping a company achieve its business goals and strengthen its brand.

Designers make many decisions, big and small, and few of them can be captured in clear processes. Moreover, incoming requirements are rarely 100% clear and consistent, so designers help product managers resolve these collisions — making for a better product. This is much more than choosing a suitable template and filling it with content.

However, if we talk about creative collaboration, where designers work in tandem with algorithms to solve product tasks, we see many good examples and clear potential. It's especially interesting how algorithms can improve our day-to-day work on websites and mobile apps.

Album covers processed through Prisma and Glitché

Creative Collaboration With Algorithms


Designers have learned to juggle many tools and skills to near perfection, and as a result a new term emerged: “product designer.” Product designers are proactive members of a product team; they understand how user research works, they can do interaction design and information architecture, they can create a visual style, enliven it with motion design, and make simple changes to the code. These people are invaluable to any product team.

However, balancing so many skills is hard — you can’t dedicate enough time to every aspect of product work. Of course, the recent boom of new design tools has shortened the time we need to create deliverables and has expanded our capabilities. However, it’s still not enough: there is too much routine, and new responsibilities eat up all of the time we’ve saved. We need to automate and simplify our work processes even more. I have collected many use cases for this.

1. Website

The Grid CMS

It chooses templates and content-presentation styles, and it retouches and crops photos — all by itself. Moreover, the system runs A/B tests to choose the most suitable pattern.

Wix Advanced Design Intelligence

It looks similar to The Grid’s semi-automated way of enabling non-professionals to create a website. Wix teaches the algorithm by feeding it many examples of high-quality modern websites, and it tries to make style suggestions relevant to the client’s industry.

Framer AI

The tool expanded into the algorithm-driven design space (how it works). It can generate a screen or a series of screens from a text prompt, and you can alter the visual style of each part of these screens.


The UIzard experimental tool generates HTML or native mobile code from a UI screenshot with over 77% accuracy. Here's the next iteration of the idea by Emil Wallner.

uKit AI

It analyzes an existing website and rebuilds it on its own technology with a proposed new design. It's a good way for SMB owners who are not designers to improve their websites. More about it.

Artificial Intelligence and the Future of Web Design

Fred O'Brien describes the current state of algorithm-driven website constructors. He interviewed many of their creators.

2. Page

Flipboard Duplo

An automated magazine layout system. A script parses an article. Then, depending on the article’s content (the number of paragraphs and words in each, the number of photos and their formats, the presence of inserts with quotes and tables, etc.), the script chooses the most suitable pattern to present this part of the article.

A home page generator using similar ideas. The algorithm finds every possible layout that is valid, combining different examples from a pattern library.

Next, each layout is examined and scored based on certain traits. Finally, the generator selects the “best” layout — basically, the one with the highest score.
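The generate-and-score loop described above is easy to sketch. A minimal Python illustration, where the pattern library and the trait weights are entirely hypothetical, not taken from any of the tools mentioned:

```python
import itertools

# Hypothetical pattern library: each page slot offers several patterns.
PATTERNS = {
    "hero": ["full-bleed", "split"],
    "feed": ["grid", "list"],
    "footer": ["compact", "expanded"],
}

def score(layout):
    """Invented trait weights; a real system might tune these via A/B tests."""
    s = 0
    if layout["hero"] == "full-bleed":
        s += 3  # strong visual anchor
    if layout["feed"] == "grid":
        s += 2  # denser content presentation
    if layout["footer"] == "compact":
        s += 1
    return s

def best_layout():
    # Enumerate every valid combination, score each, keep the winner.
    candidates = [
        dict(zip(PATTERNS, combo))
        for combo in itertools.product(*PATTERNS.values())
    ]
    return max(candidates, key=score)

print(best_layout())  # the highest-scoring combination
```

The same structure scales to real pattern libraries; only the enumeration strategy and the scoring function get more sophisticated.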


This tool combines many great concepts: transforming sketches and screenshots into editable designs, applying a style from a screenshot to mockups, and generating user interface screens from a text prompt. It also has common design and prototyping features. GPT-4 can do it too.

Galileo AI

It generates user interface screens via text prompt. You can edit these mockups in Figma, as well as generate additional illustrations and copy.


A series of Figma companion tools, like Genius, by algorithm-driven design enthusiast Jordan Singer and his team. Figma acquired them and plans to integrate the tools into the main product.


An experiment by Adobe and University of Toronto. The tool automatically refines a design layout for you. It can also propose an entirely new composition. Similar ideas can be found in Sketchplore, MS PowerPoint, and Google Slides.

Airbnb: Sketching Interfaces

It analyzes rough hand-drawn sketches with machine learning and builds a screen using unified components. The fastest way from a rough idea to a working prototype. Uizard and Microsoft have similar concepts (update), while TeleportHQ turns sketches into code. There's also a simpler Sketch plugin.

UI Bot

The best example of an algorithm-driven design tool applied to UIs by Janne Aukia. It generates stylistic variations of a dashboard and even tries layout changes.


An experimental tool to create your own simple algorithm-driven design tools. You can parametrize how it generates mockups.


An experiment by Hayk An that allows tweaking a user interface based on tokens. You can get a random combination of parameters, so it's more like a toy. See its first version.


The tool uses a sketch to generate mockups in HTML. They could be built on design system components. It's made by ex-Airbnb designers.

Studio AI

This user interface tool was initially launched in 2017, but they announced a new version focused on algorithm-driven design. There's no preview yet.


A presentation tool. It can generate slides and illustrations for them using a list of key topics. There are similar tools like Gamma.


A simple tool that generates presentations in the form of mobile websites.

Microsoft Power Pages

Copilot is a part of this website creation tool. It can generate texts, forms with business logic, page layouts, etc.

3. Component

Interpolation by Florian Schulz

He shows how you can use the idea of interpolation to create many states of components.
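The core of this idea fits in a few lines. A Python sketch with made-up state properties and values, purely for illustration:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b at position t in [0, 1]."""
    return a + (b - a) * t

def interpolate_state(state_a, state_b, t):
    """Blend two component states property by property."""
    return {key: lerp(state_a[key], state_b[key], t) for key in state_a}

# Hypothetical "default" and "pressed" states of a button.
default = {"scale": 1.0, "opacity": 1.0, "shadow_blur": 8.0}
pressed = {"scale": 0.96, "opacity": 0.85, "shadow_blur": 2.0}

# Any value of t between 0 and 1 yields a new in-between state,
# so two keyframes define an entire continuum of component states.
print(interpolate_state(default, pressed, 0.5))
```

Tools built on this idea only need designers to define the extreme states; everything in between comes for free.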


Algorithm-driven design enthusiast Jack Qiao launched an all-in-one tool that lets you design a logo, a simple brand identity, and a UI kit in that visual style, in code. He describes how the logo and font generators work. Check the logo comparison and crunch tools too. Automagic Design does the same.

BillUI Sketch Plugin

Simon Takman used Jon Gold's ideas to generate multiple variations of a UI component. It alters corner radius, color, border, and shadow.

Components AI

Another ex-member of The Grid team launched an experimental tool that generates design system tokens. See his other experiment and generative logo design process.

1. Thumbnails / Posters / Mockups


A script crops movie characters for posters, applies a stylized and localized movie title, and then runs automatic experiments on a subset of users. Real magic! A new version personalizes the poster image for different users (e.g., showing a particular actor or just a mood).

Yandex Market (in Russian)

A promotional image generator for e-commerce product lists. A marketer fills a simple form with a title and an image, and then the generator proposes an endless number of variations, all of which conform to design guidelines.

Nutella Unica

The algorithm pulled from a database of dozens of patterns and colors to create seven million different versions of Nutella's graphic identity, which were splashed across the front of jars in Italy.

Alibaba LuBan

It offers one-click image generation, intelligent layout, size expansion, color expansion, and other design services. Users can generate multiple sets of design solutions that meet their requirements in real time, simply by inputting the desired style and size. More about it.

Generative art Open Graph preview images

Matthew Ström generates Open Graph images for his blog.

Microsoft Designer

A simple design tool from Microsoft. It generates images with DALL-E 2 and mockup layouts by text prompt. How it works. They have several experimental tools like that (e.g. Bing Image Creator).

2. Copywriting


OpenAI's chat-based tool generates various forms of text — general answers, creative copy, code, and many more (this is how it works). Designers discuss professional topics, try usability testing scripts, search research reports, simplify JTBD creation, document design system components, write books, explore interaction ideas, and create Figma plugins. First experiments started with GPT-3 in 2020 like Sharif Shameem's React app, or Figma plugins by Jordan Singer and Dhvanil Patel, or color palettes by Harley Turan. People also generate startup ideas, newspapers, programming languages, and other crazy stuff. Plugins and API gave even more possibilities. Many companies integrate it via Microsoft Azure like Maya 3D tool. Others supercharge existing chat/voice assistants like Duolingo or Mercedes. Some just generate texts like LinkedIn.

Be cautious about the quality, as results are often word salad. They're usually good as drafts that you need to edit and finish yourself (MIT research shows this can increase productivity). Avoid using it to generate strategy or fake user research. Still, it's a threat to Google search, as you can get answers right away, not just links (see how Opera does it). Researchers are trying to detect texts written by ChatGPT.


An editor has nurtured a robot apprentice to write simple news articles about new gadgets. Whew!

Assisted Writing

Samim Winiger tries to re-imagine word-processing software. It explores new forms of writing that allow authors to shift their focus from creation to curation and to write more joyfully.


An algorithm-driven writing assistant. It learns a company's tone of voice and helps to generate ideas and drafts, edit human-made text, and prepare versions for different distribution channels. There are lots of tools like that now: Jasper, CopyMonkey, AI SEO, and many more.


Russell Davies made an experimental project that generates corporate taglines. In another project Janelle Shane generated craft beer names.

MS Word Resume Assistant

The add-on analyzes LinkedIn to show relevant CV examples, identify top skills, and customize a resume based on real job postings, with additional help along the way.

Alibaba AI Copywriting Tool

An AI-enabled Chinese-language copywriting tool that its creators say passed the Turing test and can produce 20,000 lines of copy a second.

Notion AI

Notion helps to generate a blog post, newsletter, or another popular type of text. It's embedded in an already popular tool, which is more convenient.


A generator of cliched inspirational quotes.


It can generate analytics reports and news headlines, as well as help with financial analysis and stock market trend reports.

Shopify Magic

It generates product descriptions.

3. Imagery: Photos, Icons, Illustrations, Patterns


The tool creates realistic photos and illustrations from a text description based on GPT-3. It started simple, but then became one of the hottest movements. Its simplified version exploded into a meme generator (see also Wordle mashup). There are prompt generators for better results (more like that), although these personal explorations (another one) are amazing. There are similar concepts like Google Imagen or Disco Diffusion. Professional illustrators discuss risks & possibilities, but not all of them are worried. Microsoft put a serious bet on it and included DALL-E into Azure Cloud.


The tool creates realistic photos and illustrations from a text description. It's one of the most popular, as it was the first freely and publicly available. People generate book illustrations and brand illustrations, blend their photos into illustrations, and do other great stuff. There are prompt generators for better results.

Stable Diffusion

The tool creates realistic photos and illustrations from a text description (how it works). It's open source, which led to enormous popularity. As a result, there are numerous user interfaces: web, iOS, macOS, Windows, and Figma. There are specialized branches (e.g. for textures, 3D, photo editing). Fan projects blossom too (e.g. modernizing MS-DOS games or Fallout 2, making illustrative QR codes). Established design tools implement it too (see Canva and Blender).


Neural network-based app that stylizes photos to look like works of famous artists. This one makes a classic portrait (more and more like this) while Google Stadia does the same for games.

Google AutoDraw

An experimental project turns sketches into icons. It can help non-designers use quality icons in their mockups. Researchers turned it the other way around — an algorithm makes human-like sketches.

Sketch Confetti

A plugin that generates modern confetti patterns to fit an existing screen mockup.

Photorealistic Facial Expression Synthesis

Yuqian Zhou & Bertram Emil Shi generated various facial expressions from a single photo. It's a complex problem because we still need to identify a user in these states. HyperStyle can alter the age.

Microsoft AI Drawing Bot

This bot generates images from caption-like text descriptions. It can be everything from ordinary pastoral scenes, such as grazing livestock, to the absurd, such as a floating double-decker bus.


This browser-based tool aggregates lots of modern utilities: image generation, background or object removal for photos & videos, etc. They come with basic tools like animation or image filters. They also help filmmakers to experiment with these tools (see an example) and they experiment with video generation themselves. Felicis investment fund has lots of assets in prompt-driven design (e.g. Poly for textures in 3D modeling).

AI Stock Images: The Uncanny Valley from ShutterStock to StockAI

Graeme Fulton explores how photo stocks handle generated images. New products like Prompt Hero and GhostlyStock emerge, while classic companies take little steps: Shutterstock partners with DALL-E 2, and Adobe Stock opened submissions with limitations (you can't copy an existing style). Another review by Alina Valyaeva.

Guardian Headliner

An experiment with the BERG design studio that highlights eyes in a photo to emphasize emotion.


The app can process video through neural networks (even streaming video).

“Reverse Prisma”

Researchers from UC Berkeley convert impressionist paintings into a more realistic photo style.


Cambridge Consultants made a tool for illustrators that transforms rough sketches into a painting in the style of Van Gogh, Cézanne, or Picasso.

Google Storyboard

The mobile app transforms videos into single-page comic layouts. It automatically selects interesting video frames, lays them out, and applies one of six visual styles.

Google Maps

The product team added buildings and areas of interest even for the smallest cities, using satellite and street-view photos. The 3D models are so detailed that you can sometimes see the blades inside rooftop fans. Apple seems to do this manually.

Perception Engines

An algorithm by Tom White draws abstract illustrations of real-world objects. It's trained on photos, and the results are close to usable in real products. It's part of the Google Artists and Machine Learning initiative (read its blog).

Generating custom photo-realistic faces using AI

An experimental tool by Shaobo Guan generates realistic photos of people. You can alter gender, age, race, and some facial details. StyleGAN from Nvidia is one of many similar tools. There's even "Hot or Not"!

This Person Doesn't Exist Sketch Plugin

Stas Kulesh made a plugin that puts AI-generated faces into design mockups. It's a great application of a popular idea. There's also a website (see similar websites for cats and rentals). Caution: researchers backtracked these photos to find the original people (see the research paper).

Fake It Till You Make It — Face analysis in the wild using synthetic data alone

These faces are actually generated, not just tweaked real photos of people. It's a major privacy risk — researchers backtracked photos from a popular face generator to find original people.


A Figma plugin by Jordan Singer. It generates icons, illustrations, and copy (authors promise more tools).


This text prompt-based generator can refine photos & images.

Adobe Firefly

This Sensei-based tool generates images via text prompt. You can edit them with all the recent methods they announce at the MAX conference.


It promises to generate illustrations in your own style. It needs just five examples for training.


This project compares image generation results from popular algorithm-driven tools using the same text prompt.


It can generate 3D objects, animations, and textures via text prompt. You can also stylize them.

Bubble Face

A tool by a comic book publisher can stylize any photo into their visual style.

Dream AI

An abstract art generator driven by a text prompt. It's one of the most spectacular, and it launched before DALL-E-like models.

TikTok Backgrounds

They added an option for generative backgrounds for videos. It's a huge audience for the concept.

Generated Humans

A huge gallery of generated full-size people. They also have a gazillion avatars.

Lensa Magic Avatars

A simple way to stylize avatars in one of several popular illustration styles. It became possible with the DreamBooth technique from Google. Lots of similar tools exist, like Avatar AI and AI Profile Picture Generator.

Meta Make-A-Video

Type a rough description of a scene and it will generate a short video matching this text. Google Imagen Video and Nvidia Align Your Latents are similar concepts.


Another tool from OpenAI creates a 3D object from a text prompt. See their other project Shap-E and similar tools like Neural3D.


It helps to generate 3D objects for virtual worlds. You can get a whole series of cars, animals, furniture, or other items that are varied, but still a part of a family.

4. Basics: Typography, Color

Variable Fonts

Parametric typography based on the idea of interpolation between several key variables: weight, width, and optical size. In 2016 it became part of the OpenType format specification. Previously, variable fonts were only possible on the web through hacks or via desktop tools like Robofont.
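Conceptually, a variable font instance is produced by interpolating glyph outlines between master designs along an axis. A toy Python sketch with made-up control points (real fonts interpolate complete outlines with tools like fonttools):

```python
def interpolate_outline(light, bold, weight, w_min=300, w_max=700):
    """Interpolate glyph control points between two masters for a given weight."""
    t = (weight - w_min) / (w_max - w_min)  # normalize the axis position to [0, 1]
    return [
        (lx + (bx - lx) * t, ly + (by - ly) * t)
        for (lx, ly), (bx, by) in zip(light, bold)
    ]

# Hypothetical control points for one vertical stem of a glyph in two masters.
light_master = [(100, 0), (140, 0), (140, 700), (100, 700)]
bold_master = [(100, 0), (220, 0), (220, 700), (100, 700)]

# A "medium" instance at weight 500 sits exactly halfway between the masters.
print(interpolate_outline(light_master, bold_master, weight=500))
```

Width and optical size work the same way: each axis is just another interpolation dimension over the same point structure.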


A tool that helps you to pair fonts using font vectors. Here's another usage of this idea from Kevin Ho who built a Font Map.

Background-Aware Color

A script that changes text color according to the background color.
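A script like this typically compares the background's luminance against a threshold. A small Python sketch using the WCAG relative-luminance formula (the 0.179 cutoff is a commonly used heuristic, not part of the standard):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def text_color_for(background):
    """Pick black or white text, whichever contrasts more with the background."""
    return (0, 0, 0) if relative_luminance(background) > 0.179 else (255, 255, 255)

print(text_color_for((255, 255, 0)))  # yellow background: black text
print(text_color_for((20, 20, 80)))   # dark blue background: white text
```

A production version would also handle mid-tone backgrounds, where neither pure black nor pure white reaches a comfortable contrast ratio.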

Yandex Launcher

This Android launcher uses an algorithm to automatically set up colors for app cards, based on app icons.

Huula Typesetter

Huula website constructor did several experiments with auto-suggesting font sizes and page colors. See also CSSToucan.

Adobe Fontphoria

Another Sensei experiment turns any letter image into a glyph, then creates a complete alphabet and font out of it. It can also apply the result to a physical object via augmented reality.

5. Animation

Microsoft Animation Autocomplete

An experimental tool for autocompleting illustrations and animations. Shadow Draw is a similar concept.

Photo Wake-Up

An experimental tool can animate a character from a photo. They can walk out, run, sit, or jump in 3D. See also an experiment from Samsung that makes photos talk.


This tool changes the style of a whole video. You can describe the new style via a text prompt.

1. Content

Anticipatory Design

It takes a broader view of UX personalization and anticipating user wishes. We already have these types of things on our phones: Google Now and Siri automatically propose a way home from work using location history data. However, the key factor here is trust: to execute anticipatory experiences, people have to give large companies permission to gather personal usage data.

Airbnb Smart Pricing

The team learned how to answer the question, “What will the booked price of a listing be on any given day in the future?” so that its hosts could set competitive prices.

Ask Luke Wroblewski

Luke Wroblewski added a ChatGPT-based search to his blog. It answers with Luke's thoughts from 25 years of blog posts, videos, and podcasts.

Giles Colborne on AI

Advice to designers about how to continue being useful in this new era and how to use various data sources to build and teach algorithms. The only element of classic UX design in Spotify’s Discover Weekly feature is the track list, whereas the distinctive work is done by a recommendation system that fills this design template with valuable music.


An algorithm that deploys individualized phrases based on what kinds of emotional pleas work best on you. They also experiment with UI.

2. Layout

Mutative Design

A well-thought-out model of adaptive interfaces by Liam Spradlin that considers many variables to fit particular users. Here's another application of this idea by researchers from Aalto and Kochi Universities.

Salesforce Einstein Designer

This engine personalizes user interface elements based on user browsing history and preferences. For example, a product card in e-commerce could highlight different information for different users. More about the criteria it takes into account, the analysis process, and prototyping.
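Rule-based personalization of this kind is easy to sketch. The signals and rules below are invented for illustration; a real engine like Einstein Designer derives such criteria from behavioral data rather than hard-coding them:

```python
def personalize_card(product, user):
    """Choose which product-card element to highlight for this user (invented rules)."""
    if user.get("price_sensitive"):
        return f"{product['name']} - now ${product['sale_price']}"
    if product["rating"] >= 4.5 and user.get("reads_reviews"):
        return f"{product['name']} - rated {product['rating']}/5"
    return f"{product['name']} - free shipping"

product = {"name": "Desk lamp", "sale_price": 29, "rating": 4.7}

# Two users see the same product card with different highlights.
print(personalize_card(product, {"price_sensitive": True}))
print(personalize_card(product, {"reads_reviews": True}))
```

The design question is the same either way: which of the card's competing facts (price, rating, shipping) earns the limited visual emphasis for this particular user.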

1. Branding & Identity Elements


A product that replaces freelancers for simple logo design. You choose your favorite styles, pick a color, and voilà — Logojoy generates endless ideas. You can refine a particular logo, see an example of a corporate style based on it, and order a branding package with business cards, envelopes, etc. It's a perfect example of an algorithm-driven design tool in the real world! Dawson Whitfield, the founder, described the machine learning principles behind it. Logoshuffle and My Brand New Logo are similar tools. Even Fiverr launched its own tool, but it's ethically questionable.

Generative Branding: Oi

Wolff Olins presented a live identity for Brazilian telecom Oi, which reacts to sound. You just can’t create crazy stuff like this without some creative collaboration with algorithms.

MIT Media Lab Logo

An algorithm can create 40,000 logo shapes in 12 different color combinations, providing the Media Lab an estimated 25 years’ worth of personalized business cards. However, they struggled to use that in real life and simplified it later.


Pentagram developed a shape generator which allows Graphcore’s internal team to create infinite patterns that illustrate their website content, presentations and more. The generator is part-random and part-weighted and is similar to the system developed for Graphcore’s animations, which are used across digital touchpoints. How to recreate it with JavaScript.


This research paper describes a system that can craft logos from 12 different colors. You can't try it online, but here's the repository.

Puerto Rico National Identity (Art Project)

An interactive algorithmic installation for Puerto Rican national identity by the Muuaaa design agency. They defined 45 culture markers and let people construct their own flags.

School of Visual Arts Senior Library 2018

A generative color-based design in a book showcasing the work of graduating design students from the School of Visual Arts in New York.

GBA Logo

This logo constantly redraws itself. Designer Talia Cotton wanted to create an unbiased logo, so she put an algorithm to work.


You can generate and edit icons and illustrations. It combines text prompt and fine-tuning with parameters.

Adobe Illustrator Generative Recolor

It can change color schemes of complex vector-based illustrations.

2. Photo & Video Editing

Photoshop Content-Aware Crop

The 2016 release of Photoshop has a content-aware feature that intelligently fills in the gaps when you use the cropping tool to rotate an image or expand the canvas beyond the image's original size.

Photoshop Generative Fill

It allows you to select any part of an image and add any object there, or just automatically extend the whole scene (how it works and what it can do). It's one of the best implementations of algorithm-driven design features in a design tool — it's built into the existing workflow, not a separate mode. Other tools like ClipDrop Uncrop caught up quickly to do the same thing.

Drag Your GAN

It can change an object in a photo with precise control over where pixels go, manipulating the pose, shape, expression, and layout of diverse categories such as animals, cars, humans, and landscapes.

Photoshop Scene Stitch

Photoshop added another content-aware feature — it replaces a whole part of a photo with a relevant piece from the Adobe Stock collection. No need for intense retouching anymore. Another experiment is Project Cloak, which replaces objects in videos. See more MAX 2017 announcements based on the Adobe Sensei platform (Puppetron and PhysicsPak are the best).

Fast Mask

Another Sensei experiment that selects and tracks an object in a video — for example, you can put text behind a dancer. See more MAX 2018 announcements (Smooth Operator and Good Bones are the best).

AI Imaging Technique Reconstructs Photos with Realistic Results

Researchers from NVIDIA introduced a deep learning method that can edit images or reconstruct a corrupted image, one that has holes or is missing pixels.

GANPaint Studio

The tool draws with semantic brushes that produce or remove units such as trees, brick-texture, or domes.

Vector Edge

Another Sensei experiment that puts graphic assets on top of raster packaging mockups. See more MAX 2022 announcements (Instant Add, Magnetic Type, Motion Mix, and Made in the Shade are the best).


An experimental algorithm re-creates famous paintings step by step. The tool is trained on screencasts of real artists.

Adobe Express

A simple graphic design tool supercharged by Adobe Firefly algorithm-driven platform. It can generate illustrations and typefaces for mockups and videos, as well as resize them automatically.

Nestle Milkmaid

Ogilvy agency extended Johannes Vermeer’s painting “The Milkmaid” with DALL-E 2 for “La Laitière” brand.


An Unreal Engine add-on that lets you generate realistic humans. You get a detailed 3D model in motion.

Object-centric vs. Canvas-centric Image Editing

To date, most digital image and video editing tools have been canvas-centric. Advances in artificial intelligence, however, have started a shift toward more object-centric workflows. Luke Wroblewski shows several examples of this transition.

3. From Sketch to Object


An experimental tool that creates a 3D model out of a sketch.

Nvidia GauGAN

The software will instantly turn a couple of lines into a gorgeous mountaintop sunset. This is MS Paint for the AI age. More about it. V2 can use a text prompt to generate images. It transformed into Nvidia Canvas tool later. Scribble Diffusion is a similar idea.

Stable Doodle

A sketch-to-image tool by Stable Diffusion makers that converts a simple drawing into a dynamic image (more about it).


It turns an image of a human into animated 3D models.

4. Generative Design & Art


One of the oldest generative design & art tools. It's used by many famous people like Joshua Davis.

Generative Visual Manipulation on the Natural Image Manifold

It helps to refine fashion design. You can sketch changes to a bag or a shoe and see how it could look in a real product.

Drawing Operations Unit: Generation 2

Sougwen Chung creates collaborative art with her robot. It learns from her style of drawing, turning her practice of art-making into a real-time duet.

Early Computer Art in the 50’s & 60’s

A phenomenal exploration of the history of generative experiments in computer art by Amy Goodchild.


A collaborative generative art tool where users remix each other's images. You can trace the history of each image.

The Rise of Long-Form Generative Art

Tyler Hobbs talks about the generative art movement in the NFT community. They use artistic algorithms (although it leads to an ocean of sameness). He mentions the Artblocks platform and his own Fidenza project.


Fish drawings generator.

Shan, Shui

A procedurally generated, infinitely scrolling Chinese landscape in vector format for the browser, inspired by traditional Chinese landscape scrolls.


A generative encyclopedia of imaginary sea creatures. It consists of an infinite number of potential underwater life forms.

5. Platforms

Adobe Sensei

A smart platform that uses Adobe's deep expertise in AI and machine learning, and the foundation for future algorithm-driven design features in Adobe's consumer and enterprise products: semantic image segmentation, font recognition, and intelligent audience segmentation. Scott Prevost sees three ways to apply it to the designer's workflow.

1. Industrial Design & Architecture

Autodesk Dreamcatcher

A tool based on the idea of generative design:

1. An algorithm generates many variations of a design using predefined rules and patterns.
2. The results are filtered based on design quality and task requirements.
3. Designers choose the most interesting and adequate variations, polishing them if needed.

It made a lot of noise and prompted several publications from UX gurus. Autodesk built a new Toronto office using these ideas.
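The three numbered steps of generative design map directly onto code. A toy Python version of the loop, with invented parameters, constraints, and fitness criteria (a real tool like Dreamcatcher optimizes actual geometry against engineering requirements):

```python
import random

random.seed(42)  # deterministic for the example

def generate(n):
    """Step 1: generate candidate designs from predefined rules (here, random parameters)."""
    return [
        {"thickness": random.uniform(1, 10), "holes": random.randint(0, 8)}
        for _ in range(n)
    ]

def feasible(design, max_weight=30):
    """Step 2: filter by task requirements (an invented weight constraint)."""
    weight = design["thickness"] * 4 - design["holes"]
    return weight <= max_weight

def shortlist(designs, k=3):
    """Step 3: rank the survivors so a designer can pick and polish the best ones."""
    ok = [d for d in designs if feasible(d)]
    # Invented fitness: lighter parts with fewer stress-concentrating holes win.
    return sorted(ok, key=lambda d: d["thickness"] * 4 + d["holes"])[:k]

for design in shortlist(generate(100)):
    print(design)
```

The designer stays in the loop at step 3: the algorithm proposes, the human disposes.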

Toyota Research Institute Unveils New Generative AI Technique for Vehicle Design

It's based on a common algorithm-driven design process, done properly: designers provide initial design sketches and engineering constraints, an experimental tool proposes several refined options, and designers finalize the best version.

Parametric Design

Zaha Hadid Architects bureau uses this term to define their generative approach to architecture.

Who Is Winning the 3D Printing Battle in Footwear & Why?

A review of high-tech shoe initiatives by Nike, adidas, New Balance, and Under Armour.

The incredible inventions of intuitive AI

A terrific talk by Maurice Conti at TEDx about algorithms in industrial design, architecture, and wicked problem solving.

Interior AI

This tool creates interior design drafts via a text prompt. Finch and PlanFinder can help with floor plans.

2. Entertainment: Games, Movies, Music, TV

No Man's Sky

Nearly all elements of the game are procedurally generated, including star systems, planets and their ecosystems, flora, fauna and their behavioural patterns, artificial structures, and alien factions and their spacecraft.

Cognitive Movie Trailer

IBM Watson helped 20th Century Fox create an engaging movie trailer. It looks like they're using this experiment at scale now.

Is the future of music artificial?

Flow Machines computer scientists unveiled the first song composed by artificial intelligence, the Beatles-esque "Daddy's Car". See also how Nao Tokui added background noise to Google Street View scenes.

Computational Video Editing

It automatically selects the most appropriate clip from the input takes for each line of dialogue, based on a user-specified set of film-editing idioms. The final cut is a good draft for an editor.
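The selection step amounts to scoring each take against the chosen idioms. A minimal Python illustration with hypothetical take metadata and an invented idiom, not the paper's actual model:

```python
# Hypothetical metadata for the available takes of each dialogue line.
takes = {
    "line 1": [
        {"take": "A", "framing": "wide", "speaker_visible": True},
        {"take": "B", "framing": "close-up", "speaker_visible": True},
    ],
    "line 2": [
        {"take": "A", "framing": "wide", "speaker_visible": False},
        {"take": "B", "framing": "close-up", "speaker_visible": True},
    ],
}

def idiom_score(clip):
    """Invented idiom: emotional scenes prefer close-ups of a visible speaker."""
    score = 0
    if clip["framing"] == "close-up":
        score += 2
    if clip["speaker_visible"]:
        score += 1
    return score

def rough_cut(takes):
    """Pick the highest-scoring take for each line of dialogue, in order."""
    return [max(clips, key=idiom_score)["take"] for clips in takes.values()]

print(rough_cut(takes))  # ['B', 'B']
```

Swapping in a different idiom (say, preferring wide establishing shots) changes the scoring function and therefore the cut, which is exactly the "user-specified idioms" idea.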


It allows you to create a digital voice that sounds like you with only one minute of audio.

Chinese news agency adds AI anchors to its broadcast team

They have the likeness of some of Xinhua's human anchors, but their voices, facial expressions and mouth movements are synthesized and animated using deep learning techniques.

Nvidia AI City

A driving simulator in which the city, from its buildings to its streets to its cars, is generated in real time by AI as you pilot through it.


A powerful example of stylizing videos. It's great for testing art directions for a movie quickly. How it works.


Google's experiment generates music in various genres from a text description. However, it's not publicly available, as they're cautious about plagiarism. Meta, though, has opened their MusicGen tool.

Synthesizer V

It can generate vocals for your music, which simplifies demo making.

TikTok Ripple

Users can directly sing or hum a melody into the app, after which it will use machine learning to expand the melody and turn it into an instrumental song.

GitHub Copilot

It helps to autocomplete code. It's built into popular IDEs like Visual Studio Code and JetBrains, and you can voice-control it too. They analyzed GitHub repositories for training (however, selling the results of open source work is controversial, which led to a class action). Copilot X is even more powerful.

Nvidia GANcraft

This experiment turns Minecraft worlds into photorealistic scenes.

Apple NeuMan

A deepfake algorithm for augmented reality — it generates character movements based on a 10-second video.

NVIDIA Broadcast: Eye Contact

This video streaming tool can imitate human eye contact live.

Intercom Fin

Intercom built their customer support bot on top of ChatGPT (how it works).

Microsoft 365 Copilot

Microsoft 365 apps like Word, Excel, PowerPoint, Outlook & Teams can generate and analyze documents. It's a great productivity boost. Google Docs is going down a similar path. Microsoft is pushing this concept into other major and minor products, starting with Windows itself.

Salesforce Einstein GPT

Salesforce integrated GPT by OpenAI into their CRM. It helps their sales, service, marketing, commerce, and data products to generate, personalize, and analyze content & data.

Tales of Syn

An AI-assisted RPG videogame and comic. It's generated using a set of tools, including Stable Diffusion.


Dovetail's UX research knowledge base simplifies insights analysis. It can propose common themes, draft insight descriptions, and more.

Atlassian Intelligence

The assistant is built into key products like Jira and Confluence. It can write meeting notes, search using human-like phrases and summarize the results, co-write tasks, etc. (see the overview).


This camera has no lens — it generates a photo using geolocation data.

TikTok Script Generator

It generates ad scripts, including specific directions for the scenes: the voiceover, on-screen visuals, and text overlays you could use.

Prime Voice AI

It can clone your voice and narrate any text using it.

3. Art

AI-Curated Exhibition

Tate Modern and Microsoft collaborated on an exhibition where a machine learning algorithm selected artworks from the museum's collection. Philipp Schmitt did a similar experiment with a photo album.

AI will be the art movement of the 21st century

Rama Allen says that human/AI collaboration is an aesthetic dialogue similar to improvisational jazz. It's great thinking on what modern art could be.

1. Machine Learning & AI for Designers

Machine Learning for Designers

O’Reilly published a great mini-book by Patrick Hebron with machine learning basics and design examples. He also has a great vision about new design tools.

A Visual Introduction to Machine Learning

A great visual overview of machine learning basics by Stephanie Yee and Tony Chu.

Experience Design in the Machine Learning Era

A terrific article by Fabien Girardin. He shows how designers can work together with big data analysts to benefit from machine learning.

Artificial intelligence and the future of design

Jon Bruner gives a good example: A genetic algorithm starts with a fundamental description of the desired outcome — say, an airline’s timetable that is optimized for fuel savings and passenger convenience.

Human-Centered Machine Learning

A fantastic overview of design process for products that use machine learning from Josh Lovejoy and Jess Holbrook. Here's a case study that applies these principles to Google Clips camera.

Google PAIR

People+AI research initiative. How might we make it easier for engineers to build and understand machine learning systems? How can AI aid and augment professionals in their jobs? Can design thinking open up new applications for AI?

What is Generative Art?

A fantastic article by Amy Goodchild about the nature of generative art. She digs into three key pillars: randomness, rules, and natural systems.

Applications Of Machine Learning For Designers

Lassi Liikkanen shows three key applications: detection, prediction, and generation.

The Best Machine Learning Courses

Class Central analyzed online courses on machine learning and selected the best.

Machine Learning for Creativity and Design

This one-day workshop explores several issues in the domain of generative models for creativity and design. 32 papers were published.

Can Users Control and Understand a UI Driven by Machine Learning?

In a study of people interacting with systems built on machine-learning algorithms, users had weak mental models and difficulties making the UI do what they want.

Microsoft Guidelines for Human-AI Interaction (PDF)

These 18 guidelines can help you design AI systems and features that are more human-centered. How to use them in a creative process.

AI is Your New Design Material

A talk by Josh Clark on using machine-generated content, insight, and interaction as design material in your everyday work.

AI Meets Design

A framework for designers who work on AI-based products, by Nadia Piet. It contains patterns and methods for designing them.

Does Artificial Intelligence Mean Data Visualization is Dead?

A great discussion about the relationship between AI and data visualization (more on it). Will we need a speedometer to visualize how fast a car is going when it’s driving itself?

Artificial Intelligence & Humanity

Dan Mall confirms the common logic of an algorithm-driven design process and shows how modern tools help here.

AI: First New UI Paradigm in 60 Years

Jakob Nielsen believes AI is introducing the third user-interface paradigm in computing history, shifting to a new interaction mechanism where users tell the computer what they want, not how to do it — thus reversing the locus of control.

Smarter Patterns

A pattern gallery for algorithm-driven designs. How to make them transparent and predictable.

What is the role of an AI designer at Meta?

How AI designers are bridging the gap between user needs and technological capabilities at Meta.

Generative Design is Doomed to Fail

A necrologue for generative design by Daniel Davis. While it’s trivial to show that generative design is possible, it’s much harder to take the next step and show that generative design is useful.

The Future Of Design — Human-Powered Or AI-Driven?

Keima Kai looks at a typical website creation process and thinks about how each stage could be improved with algorithm-driven design.

UX principles for AI art tools like DALL·E

Hannah Johnston compares principles of popular prompt-based image generators like DALL·E, Midjourney & Google Colab models. How they work and what artists think about them.

The AI boom is creating a new logo trend — the swirling hexagon

Why too many logos of algorithm-driven tools are based on the swirling hexagon. The same thing happens to sparkle icons.

AI-Powered Tools for UX Research: Issues and Limitations

AI-powered UX research insight generation tools have many problems: lack of context, extremely vague summaries and recommendations, inability to analyze image & video content, lack of citation and validation.

2. Ethics

There is no difference between computer art and human art

Oliver Roeder says that “computer art” isn’t any more provocative than “paint art” or “piano art.” The algorithmic software is written by humans, after all, using theories thought up by humans, using a computer built by humans, using specifications written by humans, using materials gathered by humans, in a company staffed by humans, using tools built by humans, and so on. Computer art is human art — a subset, rather than a distinction. Robert Hart looks for legal precedents.

Where machines could replace humans — and where they can’t (yet)

McKinsey analyzed 800 jobs to find out how easily they can be automated. Lots of interesting insights. Here's a website to check your job. This started to happen to creative jobs in 2023 (see examples: IBM, Axel Springer, Bluefocus Intelligent Communications Group Co.).

Design Machines

Travis Gertz shows that many websites already look the same, and this happened even before algorithms, at professional design agencies. The article is incredibly insightful if you want to understand the reasons for this homogenisation.

AI and the future of design: Will machines take your job?

A good article series by Rob Girling from Artefact. He looks at skills that can be automated and tries to predict the future of design as a profession.

Automation of Design: History

Lukasz Lysakowski digs through a history of design automation from book printing to modern days.

The State Of Advanced Website Builders

Drew Thomas tries to find a niche for design agencies in the world of cheap websites built by advanced online builders.

In the AI Age, “Being Smart” Will Mean Something Completely Different

Ed Hess says that the new smart will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating, and learning.

How to recognize exclusion in AI

Microsoft's inclusive design team defined five biases for AI: dataset, association, automation, interaction, and confirmation.

AI is Killing Our Grasp on Reality

Sandra Upson writes about a new way to produce audio, video, or text that resembles the real world. It could lead to fake reviews and even events.

Awful AI

David Dao made a curated list to track current scary usages of AI, hoping to raise awareness of its misuse in society.

Adobe is using AI to catch Photoshopped images

The AI looks for three types of manipulation: cloning, splicing and removal. See also their new research and a checklist by Kyle McDonald.

Everyday Ethics of AI

IBM designers created a practical guide for designers & developers for building and using AI systems.

AI is bringing out the art world’s worst instincts

AI is like photography in the 19th century, struggling to be accepted as its own art form. Aaron Hertzmann discusses tricky situations in the art world.

Invasive Diffusion: How one unwilling illustrator found herself turned into an AI model

A creepy story of Hollie Mengert, whose commercial illustrations were used to train a neural network that lets anyone literally clone her style. Andy Baio contacted the code's author, Ogbogu Kalu. The comments are disheartening: people just don't respect Hollie's work.

The AI Art Apocalypse

Brilliant thinking by Alexander Wales on how tools like DALL-E and Midjourney influence professional illustrators and artists. They will certainly hit the economics of these professions hard. However, art as self-expression will remain. Erik Hoel has a similar take.

Have I Been Trained?

Artists can search machine learning databases for links to their work and flag them for removal. It's a part of a bigger Spawning initiative — they're building tools for artist ownership of their training data, allowing them to opt into or opt out of the training of large AI models, set permissions on how their style and likeness is used, and offer their own models to the public.

Stable Attribution

A fantastic initiative — it shows source images in the training data of models like Stable Diffusion that led to an image you've just generated.

The Real Story Behind Microsoft’s Quietly Brilliant AI Design

The company studied personal assistants — human ones — to understand how to make a great machine assistant for the PowerPoint Designer feature (which suggests good slide designs to users).

AI UX: 7 Principles of Designing Good AI Products

Dávid Pásztor aims to create useful, easy-to-understand products in order to bring clarity to this shady new world of machine learning. Most importantly, we want to use the power of AI to make people’s lives easier and more joyful.

Untold AI

Christopher Noessel analyzed sci-fi movies to understand how they portray AI. He published a comparison table.

AI is sleepwalking us into surveillance

Arvind Sanjeev shows how private user data leaks into machine learning data — from medical records and smart home photos to real faces.

Mark Coeckelbergh — AI Ethics

An accessible synthesis of ethical issues raised by artificial intelligence that moves beyond hype and nightmare scenarios to address concrete questions.

US patent office rules that artificial intelligence cannot be a legal inventor

Under current law, only natural persons may be named as an inventor in a patent application.

Diffusion Bias Explorer

This experiment shows how words like "assertive" and "gentle" are mapped to stereotypes and biases in models like Stable Diffusion and DALL-E 2 (review). Bloomberg has a good long-read article about this problem.

When Machines Change Art

Aaron Hertzmann draws interesting parallels between today's boom of algorithm-driven design tools and other branches of arts and culture over past centuries. He thinks the current state is just an interim stage and shows apt analogies.

OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

Yet another sad story showing that futuristic AI-based products are backed by the sweat & pain of low-paid human workers.

What if AI tools paid artists?

Fawzi Ammache explores solutions and challenges of paying artists for their contributions to train AI models.

The End of Front-End Development

Josh W Comeau reflects on increasingly impressive demos from tools like GPT-4 and asks whether front-end developers should worry that, by the time they're fluent in HTML/CSS/JS, there won't be any jobs left for them. He disagrees and believes it's similar to no-code website builders, which have existed since 1996. New algorithm-driven tools will make developers more productive. Christian Heilmann has similar thoughts.

An Exoskeleton For Designers

I've covered several examples of algorithm-driven design in practice. What tools do modern designers need for this? If we look back to the middle of the last century, computers were envisioned as a way to extend human capabilities. Roelof Pieters and Samim Winiger have analyzed computing history and the idea of augmentation of human ability in detail. They see three levels of maturity for design tools:

First-generation systems mimic analogue tools with digital means.
The second generation is assisted creation systems, where humans and machines negotiate the creative process through tight action-feedback loops.
The third generation is assisted creation systems 3.0, which negotiate the creative process in fine-grained conversations, augment creative capabilities and accelerate the acquisition of skills from novice to expert.

Algorithm-driven design should be something like an exoskeleton for product designers — increasing the number and depth of decisions we can get through. How might designers and computers collaborate?
The working process of digital product designers could potentially look like this:

Explore a problem space, and pick the most valuable problem for the business and users to solve
Explore a solution space, and pick the best solution to fix the problem
Develop, launch and market a product that solves this problem
Evaluate how the product works for real users, and optimize it
Connect and unify the solution with other products and solutions of the company

These tasks are of two types: the analysis of implicitly expressed information and already working solutions, and the synthesis of requirements and solutions for them. Which tools and working methods do we need for each of them?


Analysis of implicitly expressed information about users that can be studied with qualitative research is hard to automate. However, exploring the usage patterns of users of existing products is a suitable task. We could extract behavioral patterns and audience segments, and then optimize the UX for them. It's already happening in ad targeting, where algorithms can cluster a user using implicit and explicit behavior patterns (within either a particular product or an ad network).
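As a rough illustration of how such clustering works (not tied to any product mentioned here), here is a plain-Python k-means sketch; the behavioral features and numbers are invented:

```python
import random

# Invented behavioral features for illustration: each user is a tuple of
# (sessions_per_week, avg_session_minutes).
users = [(2, 5), (3, 4), (2, 6),        # light users
         (14, 25), (15, 30), (13, 28)]  # heavy users

def dist(a, b):
    # squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        # move each center to the mean of its cluster
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

segments, members = kmeans(users, 2)
```

Real targeting systems use far richer features and models, but the core idea is the same: group users by behavioral similarity, then tailor the UX per segment.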


To train algorithms to optimize interfaces and content for these user clusters, designers should look into machine learning. Jon Bruner gives a good example: A genetic algorithm starts with a fundamental description of the desired outcome — say, an airline's timetable that is optimized for fuel savings and passenger convenience. It adds in the various constraints: the number of planes the airline owns, the airports it operates in, and the number of seats on each plane. It loads what you might think of as independent variables: details on thousands of flights from an existing timetable, or perhaps randomly generated dummy information. Over thousands, millions or billions of iterations, the timetable gradually improves to become more efficient and more convenient. The algorithm also gains an understanding of how each element of the timetable — the take-off time of Flight 37 from O'Hare, for instance — affects the dependent variables of fuel efficiency and passenger convenience.

In this scenario, humans curate an algorithm and can add or remove limitations and variables. The results can be tested and refined with experiments on real users. With a constant feedback loop, the algorithm improves the UX, too. Although the complexity of this work suggests that analysts will be doing it, designers should be aware of the basic principles of machine learning. O'Reilly published a great mini-book on the topic recently.
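The loop Bruner describes can be sketched as a toy genetic algorithm. The timetable encoding, fitness function, and every number below are invented for illustration only:

```python
import random

# Toy setup (not from the article): a "schedule" is a list of departure hours
# for N flights; evenly spaced flights stand in for passenger convenience.
N_FLIGHTS = 8
IDEAL_GAP = 24 / N_FLIGHTS

def fitness(schedule):
    s = sorted(schedule)
    gaps = [b - a for a, b in zip(s, s[1:])]
    return -sum((g - IDEAL_GAP) ** 2 for g in gaps)  # 0 is a perfect timetable

def mutate(schedule):
    # nudge one departure time by an hour, staying within 0..23
    s = schedule[:]
    i = random.randrange(len(s))
    s[i] = max(0, min(23, s[i] + random.choice([-1, 1])))
    return s

def crossover(a, b):
    # splice two parent timetables at a random cut point
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=200, pop_size=30):
    pop = [[random.randrange(24) for _ in range(N_FLIGHTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # selection: keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best_timetable = evolve()
```

The constraints Bruner mentions (fleet size, airports, seats) would enter as extra penalty terms in the fitness function, which is exactly where a human curator adds or removes limitations.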


Several years ago, a tool for industrial designers named Autodesk Dreamcatcher made a lot of noise and prompted several publications from UX gurus. It's based on the idea of generative design, which has been used in performance, industrial design, fashion and architecture for many years now. Many of you know Zaha Hadid Architects; its office calls this approach "parametric design."

However, it’s not yet established in digital product design, because it doesn’t help to solve utilitarian tasks. Of course, the work of architects and industrial designers has enough limitations and specificities of its own, but user interfaces aren’t static — their usage patterns, content and features change over time, often many times. However, if we consider the overall generative process — a designer defines rules, which are used by an algorithm to create the final object — there’s a lot of inspiration. The generative process for digital products could potentially look like this:

An algorithm generates many variations of a design using predefined rules and patterns.
The results are filtered based on design quality and task requirements.
Designers and managers choose the most interesting and adequate variations, polishing them if needed.
A design system runs A/B tests for one or several variations, and then humans choose the most effective of them.
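The generate → filter → choose steps above can be sketched in a few lines. The design tokens, constraints, and scoring heuristic below are invented for illustration:

```python
import itertools

# Invented design tokens for illustration.
FONTS = ["Inter", "Georgia", "Roboto"]
SIZES = [14, 16, 18]          # body text, px
CONTRASTS = [3.0, 4.5, 7.0]   # foreground/background contrast ratios

# Step 1: an algorithm generates variations from predefined rules.
variations = [{"font": f, "size": s, "contrast": c}
              for f, s, c in itertools.product(FONTS, SIZES, CONTRASTS)]

# Step 2: filter by hard requirements (e.g. WCAG AA asks for contrast >= 4.5
# for normal text; the size threshold here is an invented house rule).
candidates = [v for v in variations if v["contrast"] >= 4.5 and v["size"] >= 16]

# Step 3: rank by a quality heuristic so humans review only a shortlist;
# in a real pipeline the score would come from a model or A/B test results.
def score(v):
    return v["contrast"] + v["size"] / 10

shortlist = sorted(candidates, key=score, reverse=True)[:3]
```

Even this toy version shows the division of labor: the machine enumerates and prunes the combinatorial space, while humans pick from and polish a small shortlist.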

It's yet unknown how we can filter a huge number of concepts in digital product design, where usage scenarios are so varied. If algorithms could also help to filter generated objects, our job would become even more productive and creative. However, as product designers, we already use generative design every day: in brainstorming sessions where we propose dozens of ideas, or when we iterate on screen mockups and prototypes. Why can't we offload a part of these activities to algorithms?


Remove the routine of preparing assets and content, which is more or less mechanical work.
Experiment with different parts of a user interface or particular patterns — ideally, automatically.
Broaden creative exploration, where a computer makes combinations of variables, while the designer filters results to find the best variations.
Quickly adapt a design to various platforms and devices, though in a primitive way.
Altogether, this frees the designer from the routines of both development support and the creative process, but core decisions are still made by them. A neat side effect is that we will better understand our work, because we will be analyzing it in an attempt to automate parts of it. It will make us more productive and will enable us to better explain the essence of our work to non-designers. As a result, the overall design culture within a company will grow.


We can only talk about a company's custom solutions in the context of the company's own tasks. The work requires constant investment into development, support and enhancement.
Breaking past existing styles and solutions becomes harder. Algorithm-driven design is based on existing patterns and rules.
Copying another designer's work becomes easier if a generative design tool can dig through Dribbble.
As The Grid's CMS shows, a tool alone can't do miracles. Without a designer at the helm, its results will usually be mediocre. On the other hand, that's true of most professional tools.
There are also ethical questions: Is design produced by an algorithm valuable and distinct? Who is the author of the design? Wouldn't generative results be limited by a local maximum? Oliver Roeder says that "computer art" isn't any more provocative than "paint art" or "piano art." The algorithmic software is written by humans, after all, using theories thought up by humans, using a computer built by humans, using specifications written by humans, using materials gathered by humans, in a company staffed by humans, using tools built by humans, and so on. Computer art is human art — a subset, rather than a distinction. The revolution is already happening, so why don't we lead it?


This is a story of a beautiful future, but we should remember the limits of algorithms — they're built on rules defined by humans, even if the rules are being supercharged now with machine learning. The power of the designer is that they can make and break rules; so, in a year from now, we might define "beautiful" as something totally different. Our industry has both high- and low-skilled designers, and it will be easy for algorithms to replace the latter. However, those who can follow and break rules when necessary will find magical new tools and possibilities.

Moreover, digital products are getting more and more complex: We need to support more platforms, tweak usage scenarios for more user segments, and hypothesize more. As Frog's Harry West says, human-centered design has expanded from the design of objects (industrial design) to the design of experiences (encompassing interaction design, visual design and the design of spaces). The next step will be the design of system behavior: the design of the algorithms that determine the behavior of automated or intelligent systems. Rather than hire more and more designers, offload routine tasks to a computer. Let it play with the fonts.