It's more like an exoskeleton for designers. Algorithm-driven design tools can help us construct a UI, prepare assets and content, and personalize the user experience. I published an article about it in Smashing Magazine in January 2017. I’ve been following the idea of algorithm-driven design since 2012 and have collected practical examples ever since — this website gathers all of them. In 2016, the technological foundations of these tools became easily accessible, and the design community got interested in algorithms, neural networks, machine learning, and artificial intelligence (AI). Now is the time to rethink the modern role of the designer.
List of contents
Constructing a UI ■ Preparing Assets and Content ■ Personalizing UX ■ Graphic Design ■ Other Disciplines ■ What to Read
The idea of fully replacing a designer with an algorithm sounds futuristic, but the premise itself is wrong. Product designers help translate a raw product idea into a well-thought-out user interface, with solid interaction principles, a sound information architecture, and a visual style, while helping a company to achieve its business goals and strengthen its brand.
Designers make a lot of big and small decisions, many of which can hardly be described by clear processes. Moreover, incoming requirements are not 100% clear and consistent, so designers help product managers resolve these collisions — making for a better product. It’s about much more than choosing a suitable template and filling it with content.
However, if we talk about creative collaboration, where designers work in tandem with algorithms to solve product tasks, we see a lot of good examples and clear potential. It’s especially interesting how algorithms can improve our day-to-day work on websites and mobile apps.
Album covers processed through Prisma and Glitché (Hover to see how the images transform)
Designers have learned to juggle many tools and skills to near perfection, and as a result, a new term emerged: “product designer.” Product designers are proactive members of a product team; they understand how user research works, they can do interaction design and information architecture, they can create a visual style, enliven it with motion design, and make simple changes in the code for it. These people are invaluable to any product team.
However, balancing so many skills is hard — you can’t dedicate enough time to every aspect of product work. Of course, a recent boom of new design tools has shortened the time we need to create deliverables and has expanded our capabilities. However, it’s still not enough. There is still too much routine, and new responsibilities eat up all of the time we’ve saved. We need to automate and simplify our work processes even more. I’ve collected many use cases for this below.
Constructing a UI
Publishing tools such as Webflow, Readymag, and Squarespace have already simplified the author’s work — with countless high-quality templates, an author gets a pretty design without having to pay for a designer. There is an opportunity to make these templates smarter, so that the barrier to entry gets even lower.
It chooses templates and content-presentation styles, and it retouches and crops photos — all by itself. Moreover, the system runs A/B tests to choose the most suitable pattern.
It looks similar to The Grid’s semi-automated way of enabling non-professionals to create a website. Wix teaches the algorithm by feeding it many examples of high-quality modern websites. Moreover, it tries to make style suggestions relevant to the client’s industry.
The tool expanded into the algorithm-driven design space (how it works). It can generate a screen or a series of screens from a text prompt, and you can alter the visual style of each part of these screens.
It analyzes an existing website and rebuilds it on its own technology with a proposed new design. It's a good way for SMB owners who are not designers to make their websites better. More about it.
An automated magazine layout system. A script parses an article; then, depending on the article’s content (the number of paragraphs and words in each, the number of photos and their formats, the presence of inserts with quotes and tables, etc.), it chooses the most suitable pattern to present each part of the article.
A home page generator using similar ideas. The algorithm finds every possible layout that is valid, combining different examples from a pattern library.
Next, each layout is examined and scored based on certain traits. Finally, the generator selects the “best” layout — basically, the one with the highest score.
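To make this concrete, here's a minimal sketch of the "enumerate, score, pick the best" loop; the pattern names, scoring traits, and weights are all invented for illustration:

```python
# Toy version of a layout generator: enumerate every candidate layout,
# score each one on a few traits, and pick the highest-scoring one.
import itertools

PATTERNS = ["hero", "two-column", "list", "grid"]  # hypothetical pattern library

def candidate_layouts(num_sections):
    # Every assignment of a pattern to each page section is a candidate.
    return itertools.product(PATTERNS, repeat=num_sections)

def score(layout):
    s = 0.0
    if layout[0] == "hero":
        s += 2.0                      # reward a strong opening section
    for a, b in zip(layout, layout[1:]):
        if a == b:
            s -= 1.0                  # penalize visual monotony
    s += 0.5 * len(set(layout))       # reward overall variety
    return s

best = max(candidate_layouts(4), key=score)
print(best, score(best))
```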
This tool combines lots of great concepts: transform sketches and screenshots into editable designs, apply a style from a screenshot to mockups, and generate user interface screens from a text prompt. It also has common design and prototyping features. GPT-4 can do it too.
An experiment by Adobe and the University of Toronto. The tool automatically refines a design layout for you. It can also propose an entirely new composition. Similar ideas can be found in Sketchplore, MS PowerPoint, and Google Slides.
The best example of an algorithm-driven design tool applied to UIs by Janne Aukia. It generates stylistic variations of a dashboard and even tries layout changes.
An experiment by Hayk An that allows tweaking a user interface based on tokens. You can get a random combination of parameters, so it's more like a toy. See its first version.
Algorithm-driven design enthusiast Jack Qiao launched an all-in-one tool which lets you design a logo, a simple brand identity, and a UI kit with that visual style in code. He describes how the logo and font generators work. Check the logo comparison and crunch tools too. Automagic Design does the same.
Preparing Assets and Content
Creating cookie-cutter graphic assets in many variations is one of the most boring parts of a designer’s work. It takes a lot of time and is demotivating, when designers could be spending this time on more valuable product work.
A script crops movie characters for posters, then applies a stylized and localized movie title, then runs automatic experiments on a subset of users. Real magic! A new version started to personalize poster images for different users (e.g., showing a particular actor or just a mood).
A promotional image generator for e-commerce product lists. A marketer fills a simple form with a title and an image, and then the generator proposes an endless number of variations, all of which conform to design guidelines.
The algorithm pulled from a database of dozens of patterns and colours to create seven million different versions of Nutella's graphic identity, which have been splashed across the front of jars in Italy.
One-click design services: intelligent image generation, layout, size expansion, color expansion, and more. Users simply input the desired style and size to generate multiple sets of suitable designs in real time. More about it.
A simple design tool from Microsoft. It generates images with DALL-E 2 and mockup layouts from a text prompt. How it works. They have several experimental tools like that (e.g., Bing Image Creator).
Samim Winiger tries to re-imagine word-processing software. It explores new forms of writing that allow authors to shift their focus from creation to curation, and to write more joyfully.
An algorithm-driven writing assistant. It learns a company's tone of voice and helps to generate ideas and drafts, edit human-made text, and prepare versions for different distribution channels. There are lots of tools like that now: Jasper, Copy.ai, CopyMonkey, AI SEO, and many more.
The add-on analyzes LinkedIn to show relevant CV examples, identify top skills, customize a resume based on real job postings, and offer additional help.
An experimental project turns sketches into icons. It can help non-designers use quality icons in their mockups. Researchers also turned it the other way around — an algorithm makes human-like sketches.
Yuqian Zhou & Bertram Emil Shi generated various facial expressions from a single photo. It's a complex problem because the person still has to be recognizable in these states. HyperStyle can alter the age.
This bot generates images from caption-like text descriptions. The output can be anything from ordinary pastoral scenes, such as grazing livestock, to the absurd, such as a floating double-decker bus.
The mobile app transforms videos into single-page comic layouts. It automatically selects interesting video frames, lays them out, and applies one of six visual styles.
The product team added buildings and areas of interest even for the smallest cities using satellite and street-view photos. Their 3D models are so detailed that you can sometimes see the blades inside the rooftop fans. It looks like Apple does it manually.
An algorithm by Tom White draws abstract illustrations of real-world objects. It's trained on photos, and the result is close to usable in real products. It's part of the Google Artists and Machine Learning initiative (read its blog).
An experimental tool by Shaobo Guan generates realistic photos of people. You can alter gender, age, race, and some facial details. StyleGAN from Nvidia is one of many similar tools. There's even "Hot or Not"!
A simple way to stylize avatars in one of several popular illustration styles. It became possible with the DreamBooth technique from Google. Lots of similar tools exist, like Avatar AI and AI Profile Picture Generator.
It helps to generate 3D objects for virtual worlds. You can get a whole series of cars, animals, furniture, or other items that are varied, but still a part of a family.
Parametric typography based on the idea of interpolation from several key variables: weight, width, and optical size. In 2016, it became a part of the OpenType format specification. Previously, such interpolation was only available through hacks or desktop tools like Robofont.
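As a small taste of this interpolation in practice, here's a sketch using the fontTools Python library (its instancer module is real; the file name and axis values below are just examples) to pin a variable font's axes to fixed values:

```python
# Pin the weight and width axes of a variable font to produce a static
# instance — the same interpolation idea the OpenType spec formalized.
from fontTools import ttLib
from fontTools.varLib.instancer import instantiateVariableFont

font = ttLib.TTFont("MyVariableFont.ttf")  # hypothetical variable font file
instantiateVariableFont(font, {"wght": 650, "wdth": 85}, inplace=True)
font.save("MyFont-SemiBoldCondensed.ttf")
```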
Another Sensei experiment turns any letter image into a glyph, then creates a complete alphabet and font out of it. It can also apply the result to a physical object via augmented reality.
This tool changes the style of a whole video. You can describe the new style via a text prompt.
Personalizing UX
One way to get a clear and well-developed strategy is to personalize a product for a narrow audience segment or even for specific users. We see it every day in Facebook newsfeeds, Google search results, Netflix and Spotify recommendations, and many other products. Besides relieving users of the burden of filtering information, personalization makes the connection to the brand more emotional — the product seems to care so much about them.
It takes a broader view of UX personalization and the anticipation of user wishes. We already have these types of things on our phones: Google Now and Siri automatically propose a way home from work using location history data. However, the key factor here is trust: to execute anticipatory experiences, people have to give large companies permission to gather personal usage data in the background.
The team learned how to answer the question, “What will the booked price of a listing be on any given day in the future?” so that its hosts could set competitive prices.
Advice to designers about how to continue being useful in this new era and how to use various data sources to build and teach algorithms. The only element of classic UX design in Spotify’s Discover Weekly feature is the track list, whereas the distinctive work is done by a recommendation system that fills this design template with valuable music.
A well-thought-out model of adaptive interfaces by Liam Spradlin that considers many variables to fit particular users. Here's another application of this idea by researchers from Aalto and Kochi Universities.
This engine personalizes user interface elements based on user browsing history and preferences. E.g., a product card in e-commerce could highlight different information. More about the criteria it takes into account, the analysis process, and prototyping.
Graphic Design
There are great examples of ready-made algorithm-driven design tools for classic graphic design: identity, typography, drawing, and illustration. They show that it's possible to apply similar ideas to creating a UI.
A product to replace freelancers for simple logo design. You choose your favorite styles, pick a color, and voilà — Logojoy generates endless ideas. You can refine a particular logo, see an example of a corporate style based on it, and order a branding package with business cards, envelopes, etc. It’s the perfect example of an algorithm-driven design tool in the real world! Dawson Whitfield, the founder, described the machine learning principles behind it. Logoshuffle and My Brand New Logo are similar tools. Even Fiverr launched its own tool, but it's ethically questionable.
Wolff Olins presented a live identity for the Brazilian telecom Oi, which reacts to sound. You just can’t create crazy stuff like this without some creative collaboration with algorithms.
An algorithm can create 40,000 logo shapes in 12 different color combinations, providing the Media Lab with an estimated 25 years’ worth of personalized business cards. However, they struggled to use it in real life and simplified the identity later.
Pentagram developed a shape generator which allows Graphcore’s internal team to create infinite patterns that illustrate their website content, presentations and more. The generator is part-random and part-weighted and is similar to the system developed for Graphcore’s animations, which are used across digital touchpoints. How to recreate it with JavaScript.
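A part-random, part-weighted generator like this is simple to sketch. Pentagram's is JavaScript; below is a rough Python equivalent in which every shape, weight, and color is invented for illustration:

```python
# Tile a grid with shapes drawn from a weighted palette; a fixed seed
# makes any "infinite" pattern reproducible for a specific asset.
import random

SHAPES = ["circle", "square", "triangle", "arc"]
WEIGHTS = [0.4, 0.3, 0.2, 0.1]   # brand-favored shapes appear more often
PALETTE = ["#0A0A0A", "#FF6F61", "#2E5BFF", "#F4F4F4"]

def generate_pattern(cols=8, rows=8, seed=None):
    rng = random.Random(seed)
    return [{"x": c, "y": r,
             "shape": rng.choices(SHAPES, weights=WEIGHTS)[0],
             "color": rng.choice(PALETTE),
             "rotation": rng.choice([0, 90, 180, 270])}
            for r in range(rows) for c in range(cols)]

tiles = generate_pattern(seed=42)
print(tiles[0])
```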
An interactive algorithmic installation for Puerto Rican national identity by the Muuaaa design agency. They defined 45 culture markers and let people construct their own flags.
The 2016 release of Photoshop has a content-aware feature that intelligently fills in the gaps when you use the cropping tool to rotate an image or expand the canvas beyond the image’s original size.
It allows you to select any part of an image and add any object there, or just automatically extend the whole scene (how it works and what it can do). It's one of the best implementations of algorithm-driven design features in a design tool — it's built into the existing workflow, not hidden in a separate mode. Other tools, like ClipDrop Uncrop, quickly caught up to do the same thing.
It can change an object in a photo with precise control over where pixels go, thus manipulating the pose, shape, expression, and layout of diverse categories such as animals, cars, humans, landscapes, etc.
Photoshop added another content-aware feature — it replaces a whole part of a photo with a relevant piece from the Adobe Stock collection. No need for intense retouching anymore. Another experiment is Project Cloak, which removes objects from videos. See more MAX 2017 announcements based on the Adobe Sensei platform (Puppetron and PhysicsPak are the best).
Researchers from NVIDIA introduced a deep learning method that can edit images or reconstruct a corrupted image, one that has holes or is missing pixels.
A simple graphic design tool supercharged by the Adobe Firefly algorithm-driven platform. It can generate illustrations and typefaces for mockups and videos, as well as resize them automatically.
To date, most digital image and video editing tools have been canvas-centric. Advances in artificial intelligence, however, have started a shift toward more object-centric workflows. Luke Wroblewski shows several examples of this transition.
Tyler Hobbs talks about the generative art movement in the NFT community. They use artistic algorithms (although it leads to an ocean of sameness). He mentions the Artblocks platform and his own Fidenza project.
A smart platform that uses Adobe’s deep expertise in AI and machine learning, and it will be the foundation for future algorithm-driven design features in Adobe’s consumer and enterprise products: semantic image segmentation, font recognition, and intelligent audience segmentation. Scott Prevost sees three ways to apply it to designers’ workflows.
Other Disciplines
There are a lot of examples based on the idea of generative design, which has been used in performances, industrial design, fashion, architecture, music, and games for many years now. I'll highlight some of them, but you should visit websites like Creative AI to see more.
1. An algorithm generates many variations of a design using predefined rules and patterns.
2. The results are filtered based on design quality and task requirements.
3. Designers choose the most interesting and adequate variations, polishing them if needed.
It's based on a common algorithm-driven design process, done properly: designers provide initial design sketches and engineering constraints, an experimental tool proposes several refined options, and designers finalize the best version.
Nearly all elements of the game are procedurally generated, including star systems, planets and their ecosystems, flora, fauna and their behavioural patterns, artificial structures, and alien factions and their spacecraft.
It automatically selects the most appropriate clip from one of the input takes for each line of dialogue, based on a user-specified set of film-editing idioms. The final cut is a good draft for a human editor.
They have the likeness of some of Xinhua's human anchors, but their voices, facial expressions and mouth movements are synthesized and animated using deep learning techniques.
Google's experiment generates music in various genres from a text description. However, it's not publicly available, as they're cautious about plagiarism. Meta, though, has opened up its MusicGen tool.
Users can directly sing or hum a melody into the app, after which it will use machine learning to expand the melody and turn it into an instrumental song.
It helps autocomplete code. It's built into popular IDEs like Visual Studio Code and JetBrains, and you can voice-control it too. They analyzed GitHub repositories for training (however, selling the results of open-source work is controversial — which led to a class-action lawsuit). Copilot X is even more powerful.
Microsoft 365 apps like Word, Excel, PowerPoint, Outlook & Teams can generate and analyze documents. It's a great productivity boost. Google Docs goes a similar path. Microsoft pushes this concept to other major and minor products, starting from Windows itself.
The assistant is built into key products like Jira and Confluence. It can write meeting notes, search via human-like phrases and summarize results, co-write tasks, etc. (see the overview).
Rama Allen says that human/AI collaboration is an aesthetic dialogue similar to improvisational jazz. It's great thinking on what modern art could be.
What to Read
Several years ago, the hottest discussion in the design community was "Should designers code?". Now things have become even more complex. There are several starting points for you, including my article on Smashing Magazine (all these links are in it).
O’Reilly published a great mini-book by Patrick Hebron with machine learning basics and design examples. He also has a great vision about new design tools.
Jon Bruner gives a good example: A genetic algorithm starts with a fundamental description of the desired outcome — say, an airline’s timetable that is optimized for fuel savings and passenger convenience.
People+AI research initiative. How might we make it easier for engineers to build and understand machine learning systems? How can AI aid and augment professionals in their jobs? Can design thinking open up new applications for AI?
In a study of people interacting with systems built on machine-learning algorithms, users had weak mental models and difficulties making the UI do what they want.
A great discussion about the relationship between AI and data visualization (more on it). Will we need a speedometer to visualize how fast a car is going when it’s driving itself?
Jakob Nielsen believes AI is introducing the third user-interface paradigm in computing history, shifting to a new interaction mechanism where users tell the computer what they want, not how to do it — thus reversing the locus of control.
A necrologue for generative design by Daniel Davis. While it’s trivial to show that generative design is possible, it’s much harder to take the next step and show that generative design is useful.
Hannah Johnston compares the principles of popular prompt-based image generators like DALL·E, Midjourney, and Google Colab models: how they work and what artists think about them.
AI-powered UX research insight generation tools have many problems: lack of context, extremely vague summaries and recommendations, inability to analyze image and video content, and a lack of citation and validation.
Oliver Roeder says that “computer art” isn’t any more provocative than “paint art” or “piano art.” The algorithmic software is written by humans, after all, using theories thought up by humans, using a computer built by humans, using specifications written by humans, using materials gathered by humans, in a company staffed by humans, using tools built by humans, and so on. Computer art is human art — a subset, rather than a distinction. Robert Hart looks for legal precedents.
Travis Gertz shows that many websites already look the same — it happened even before algorithms, including for professional design agencies. The article is incredibly insightful if you want to understand the reasons for this homogenisation.
A good article series by Rob Girling from Artefact. He looks at skills that can be automated and tries to predict the future of design as a profession.
Ed Hess says that the new smart will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating, and learning.
A creepy story of Hollie Mengert — her commercial illustrations were used to train a neural network, letting anyone literally clone her style. Andy Baio contacted the code's author, Ogbogu Kalu. The comments are disgusting — people just don't respect Hollie's work.
Brilliant thinking by Alexander Wales on how tools like DALL-E and Midjourney influence professional illustrators and artists. It'll kick the economics of these professions in the stomach for sure; however, art as self-expression will stay. Erik Hoel has a similar take.
Artists can search machine learning databases for links to their work and flag them for removal. It's a part of a bigger Spawning initiative — they're building tools for artist ownership of their training data, allowing them to opt into or opt out of the training of large AI models, set permissions on how their style and likeness is used, and offer their own models to the public.
The company studied personal assistants — human ones — to understand how to make a great machine assistant for the PowerPoint Designer feature (which suggests good slide designs for users).
Dávid Pásztor aims to create useful, easy-to-understand products in order to bring clarity to this shady new world of machine learning — most importantly, to use the power of AI to make people’s lives easier and more joyful.
An accessible synthesis of ethical issues raised by artificial intelligence that moves beyond hype and nightmare scenarios to address concrete questions.
This experiment shows how words like "assertive" and "gentle" are mapped to stereotypes and biases in models like Stable Diffusion and DALL-E 2 (review). Bloomberg has a good long-read article about this problem.
Aaron Hertzmann draws interesting parallels between today's algorithm-driven design tool boom and other branches of arts and culture over past centuries. He believes the current state is just an interim one and offers spot-on analogies.
Josh W Comeau reflects on increasingly impressive demos from tools like GPT-4 and asks whether front-end developers should worry that by the time they're fluent in HTML/CSS/JS, there won't be any jobs left for them. He disagrees and believes the situation is similar to no-code website builders, which have existed since 1996: new algorithm-driven tools will make developers more productive. Christian Heilmann has similar thoughts.
An Exoskeleton For Designers
I've covered several examples of algorithm-driven design in practice. What tools do modern designers need for this? If we look back to the middle of the last century, computers were envisioned as a way to extend human capabilities. Roelof Pieters and Samim Winiger have analyzed computing history and the idea of augmentation of human ability in detail. They see three levels of maturity for design tools:
First-generation systems mimic analogue tools with digital means.
The second generation is assisted creation systems, where humans and machines negotiate the creative process through tight action-feedback loops.
The third generation is assisted creation systems 3.0, which negotiate the creative process in fine-grained conversations, augment creative capabilities and accelerate the acquisition of skills from novice to expert.
Algorithm-driven design should be something like an exoskeleton for product designers — increasing the number and depth of decisions we can get through. How might designers and computers collaborate? The working process of digital product designers could potentially look like this:
1. Explore a problem space, and pick the most valuable problem for the business and users to solve (analysis).
2. Explore a solution space, and pick the best solution to fix the problem (analysis).
3. Develop, launch, and market a product that solves this problem (synthesis).
4. Evaluate how the product works for real users, and optimize it (analysis + synthesis).
5. Connect and unify the solution with other products and solutions of the company (synthesis).
These tasks are of two types: the analysis of implicitly expressed information and already working solutions, and the synthesis of requirements and solutions for them. Which tools and working methods do we need for each of them?
Analysis
Analysis of implicitly expressed information about users that can be studied with qualitative research is hard to automate. However, exploring the usage patterns of users of existing products is a suitable task. We could extract behavioral patterns and audience segments, and then optimize the UX for them. It's already happening in ad targeting, where algorithms can cluster a user using implicit and explicit behavior patterns (within either a particular product or an ad network).
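As a rough illustration, here's a sketch that segments users by behavioral features with k-means; the feature matrix is random stand-in data, and the number of segments is arbitrary:

```python
# Cluster users into behavioral segments; real columns might be sessions
# per week, average session length, purchases, etc. (all faked here).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
behavior = rng.random((1000, 4))          # stand-in for real usage metrics

features = StandardScaler().fit_transform(behavior)
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)

# Each user now has a segment label; a product could adapt its UI or
# content per segment and A/B-test the adaptation.
print(np.bincount(segments))
```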
(Figure: stages of an assisted-creation workflow — Inspire, Generate, Explore, Fabricate.)
To train algorithms to optimize interfaces and content for these user clusters, designers should look into machine learning. Jon Bruner gives a good example: A genetic algorithm starts with a fundamental description of the desired outcome — say, an airline's timetable that is optimized for fuel savings and passenger convenience. It adds in the various constraints: the number of planes the airline owns, the airports it operates in, and the number of seats on each plane. It loads what you might think of as independent variables: details on thousands of flights from an existing timetable, or perhaps randomly generated dummy information. Over thousands, millions or billions of iterations, the timetable gradually improves to become more efficient and more convenient. The algorithm also gains an understanding of how each element of the timetable — the take-off time of Flight 37 from O'Hare, for instance — affects the dependent variables of fuel efficiency and passenger convenience.
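Here's a toy genetic algorithm in the spirit of Bruner's example; the "timetable" is just a vector of departure hours, and the fitness function is a made-up stand-in for fuel savings and passenger convenience:

```python
# Evolve a toy "timetable" (a list of departure hours) toward a higher
# fitness score through selection, crossover, and mutation.
import random

FLIGHTS, POP_SIZE, GENERATIONS = 20, 100, 200

def random_timetable():
    return [random.uniform(0, 24) for _ in range(FLIGHTS)]

def fitness(timetable):
    # Invented score: reward spreading departures out (convenience),
    # penalize late-night slots (a stand-in for operational constraints).
    spread = min(abs(a - b) for i, a in enumerate(timetable)
                 for b in timetable[i + 1:])
    night_penalty = sum(1 for t in timetable if t < 6)
    return spread - 0.1 * night_penalty

def crossover(a, b):
    cut = random.randrange(FLIGHTS)
    return a[:cut] + b[cut:]

def mutate(timetable, rate=0.05):
    return [random.uniform(0, 24) if random.random() < rate else t
            for t in timetable]

population = [random_timetable() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]                 # selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(round(fitness(max(population, key=fitness)), 3))
```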
In this scenario, humans curate an algorithm and can add or remove limitations and variables. The results can be tested and refined with experiments on real users. With a constant feedback loop, the algorithm improves the UX, too. Although the complexity of this work suggests that analysts will be doing it, designers should be aware of the basic principles of machine learning. O'Reilly published a great mini-book on the topic recently.
Synthesis
Several years ago, a tool for industrial designers named Autodesk Dreamcatcher made a lot of noise and prompted several publications from UX gurus. It's based on the idea of generative design, which has been used in performances, industrial design, fashion, and architecture for many years now. Many of you know Zaha Hadid Architects; its office calls this approach "parametric design."
However, it’s not yet established in digital product design, because it doesn’t help to solve utilitarian tasks. Of course, the work of architects and industrial designers has enough limitations and specificities of its own, but user interfaces aren’t static — their usage patterns, content, and features change over time, often many times. However, if we consider the overall generative process — a designer defines rules, which are used by an algorithm to create the final object — there’s a lot of inspiration in it. The generative process could look like this:
An algorithm generates many variations of a design using predefined rules and patterns.
The results are filtered based on design quality and task requirements.
Designers and managers choose the most interesting and adequate variations, polishing them if needed.
A design system runs A/B tests for one or several variations, and then humans choose the most effective of them (a toy sketch of this loop follows the list).
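Here's a bare-bones, runnable toy of this four-step loop; the "designs" are just parameter dictionaries, and every rule, filter, and metric is invented:

```python
# Generate → filter → (human) shortlist → A/B test, in miniature.
import random

def generate_variations(n=200, seed=1):
    rng = random.Random(seed)
    return [{"columns": rng.choice([1, 2, 3]),
             "density": rng.random(),
             "accent": rng.choice(["blue", "red", "green"])}
            for _ in range(n)]

def passes_guidelines(v):
    return 0.2 <= v["density"] <= 0.8      # step 2: automatic quality filter

def designer_shortlist(variations, k=5):
    # Step 3: stand-in for human curation — in reality a person chooses.
    return sorted(variations, key=lambda v: v["density"])[:k]

def ab_test(v):
    return random.random()                 # step 4: stand-in conversion rate

candidates = [v for v in generate_variations() if passes_guidelines(v)]
winner = max(designer_shortlist(candidates), key=ab_test)
print(winner)
```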
It's yet unknown how we can filter a huge number of concepts in digital product design, where usage scenarios are so varied. If algorithms could also help to filter generated objects, our job would be even more productive and creative. However, as product designers, we use generative design every day — in brainstorming sessions where we propose dozens of ideas, or when we iterate on screen mockups and prototypes. Why can't we offload a part of these activities to algorithms?
Pros
Remove the routine of preparing assets and content, which is more or less mechanical work.
Experiment with different parts of a user interface or particular patterns — ideally, automatically.
Broaden creative exploration, where a computer makes combinations of variables, while the designer filters results to find the best variations.
Quickly adapt a design to various platforms and devices, though in a primitive way.
Altogether, this frees the designer from the routines of both development support and the creative process, but core decisions are still made by them. A neat side effect is that we will better understand our work, because we will be analyzing it in an attempt to automate parts of it. It will make us more productive and will enable us to better explain the essence of our work to non-designers. As a result, the overall design culture within a company will grow.
Cons
We can only talk about a company's custom solutions in the context of the company's own tasks. The work requires constant investment into development, support and enhancement.
Breaking past existing styles and solutions becomes harder. Algorithm-driven design is based on existing patterns and rules.
Copying another designer's work becomes easier if a generative design tool can dig through Dribbble.
As The Grid's CMS shows, a tool alone can't work miracles. Without a designer at the helm, its results will usually be mediocre. On the other hand, that's true of most professional tools.
There are also ethical questions: Is design produced by an algorithm valuable and distinct? Who is the author of the design? Wouldn't generative results be limited by a local maximum? Oliver Roeder says that "computer art" isn't any more provocative than "paint art" or "piano art." The algorithmic software is written by humans, after all, using theories thought up by humans, using a computer built by humans, using specifications written by humans, using materials gathered by humans, in a company staffed by humans, using tools built by humans, and so on. Computer art is human art — a subset, rather than a distinction. The revolution is already happening, so why don't we lead it?
Conclusion
This is a story of a beautiful future, but we should remember the limits of algorithms — they're built on rules defined by humans, even if the rules are now being supercharged with machine learning. The power of the designer is that they can make and break rules; so, a year from now, we might define "beautiful" as something totally different. Our industry has both high- and low-skilled designers, and it will be easy for algorithms to replace the latter. However, those who can follow and break rules when necessary will find magical new tools and possibilities.
Moreover, digital products are getting more and more complex: We need to support more platforms, tweak usage scenarios for more user segments, and hypothesize more. As Frog's Harry West says, human-centered design has expanded from the design of objects (industrial design) to the design of experiences (encompassing interaction design, visual design and the design of spaces). The next step will be the design of system behavior: the design of the algorithms that determine the behavior of automated or intelligent systems. Rather than hire more and more designers, offload routine tasks to a computer. Let it play with the fonts.