Intercom built their customer support bot on top of ChatGPT (how it works).
This video streaming tool can simulate human eye contact in real time.
This experiment shows how words like "assertive" and "gentle" are mapped to stereotypes and biases in models like Stable Diffusion and DALL-E 2 (review). Bloomberg has a good long-read article about this problem.
This tool creates interior design drafts from a text prompt. Finch & PlanFinder can help with floor plans.
They added an option to generate backgrounds for videos. It's a huge audience for the concept.
A simple design tool from Microsoft. It generates images with DALL-E 2 and mockup layouts from a text prompt. How it works. They have several experimental tools like this (e.g., Bing Image Creator).
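For context, text-to-image generation like this usually comes down to a single API call. Here is a minimal sketch using the DALL-E 2 image endpoint via the `openai` Python package (0.x API); the prompt, size, and key are placeholders, and Designer's own integration details aren't public.

```python
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

# Ask DALL-E 2 for one image from a plain-text prompt.
response = openai.Image.create(
    prompt="flat illustration of a cozy home office, pastel colors",  # illustrative prompt
    n=1,               # number of images to generate
    size="1024x1024",  # DALL-E 2 supports 256x256, 512x512, 1024x1024
)

# The API returns a temporary URL for each generated image.
print(response["data"][0]["url"])
```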
OpenAI's chat-based tool generates various forms of text — general answers, creative copy, code, and more (this is how it works). Designers discuss professional topics, try usability testing scripts, search research reports, simplify JTBD creation, document design system components, write books, explore interaction ideas, and create Figma plugins. The first experiments started with GPT-3 back in 2020: Sharif Shameem's React app, Figma plugins by Jordan Singer and Dhvanil Patel, and color palettes by Harley Turan. People also generate startup ideas, newspapers, programming languages, and other crazy stuff. Plugins and the API opened up even more possibilities. Many companies integrate it via Microsoft Azure, like the Maya 3D tool. Others supercharge existing chat/voice assistants, like Duolingo or Mercedes. Some just generate text, like LinkedIn.
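To show what "integrating it via the API" looks like in practice, below is a minimal sketch of a chat completion call with the `openai` Python package (0.x API). The model name, prompts, and key are illustrative placeholders, not how any of the companies above actually wire it up.

```python
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

# One round-trip to the Chat Completions API behind ChatGPT.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the ChatGPT model exposed through the API
    messages=[
        {"role": "system", "content": "You are an assistant for product designers."},
        {"role": "user", "content": "Draft a short usability testing script for a checkout flow."},
    ],
    temperature=0.7,  # higher values give more varied drafts
)

print(response["choices"][0]["message"]["content"])
```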
Be cautious about the quality, as the results are often word salad. They work best as drafts that you edit and finish yourself (MIT research shows this can increase productivity). Avoid using it to generate strategy or fake user research. Still, it's a threat to Google search, as you get answers right away, not just links (see how Opera does it). Researchers are trying to detect text written by ChatGPT.
These faces are actually generated, not just tweaked photos of real people. It's also a major privacy risk: researchers traced photos from a popular face generator back to the original people.