Creativity in the Age of Machines: How AI-Powered Creatives Will Enable a More Beautiful World
What impact will machine learning have on the future of design and creativity?
As a researcher, I’ve always had a strong interest in robots. Today, I see a lot of similarities between how I imagined robots would work and what we expect our intelligent assistants to do. Both will need effective computer vision, voice recognition, and synthesis — but most importantly, interesting things to do and say.
There are striking parallels between this moment and one theory of what triggered the Cambrian Explosion. According to that theory, once creatures developed working vision, evolution accelerated dramatically, producing a wide variety of new forms and behaviors. Today, artificial intelligence (AI) is driving a new kind of evolution, especially through advances in visual perception and their application to everything from drones to artistic creativity.
Visual data are central to the way we learn about the world. AI can push us to be more innovative and help us accomplish many different tasks inside creative apps, including content understanding, smart editing, and powering virtual agents. All of these things will serve as a catalyst to an explosion in the volume and quality of creative work.
At its best, AI distills the wisdom of the crowd and expresses it as a useful tool. Essentially, it should help us make new art as we learn from each other. It’s easy to see the time we can save and the productivity we can gain from these applications. It’s equally easy to see how this could keep creatives from getting tied down in tedious tasks, leaving more time for inspiration and the thoughtful work of design.
Despite fears that AI will replace the need for creativity, this technology has the potential to make us more productive while also raising expectations that push us to be more original. Here are some ways that might happen.
Better content understanding can drive more creativity
How much time do you spend searching for the right images to use in your next design, or trying to find the right combination of keywords to match the tags and descriptions others have applied to images? In the near future, that may be time you get back. AI is already learning how to “view” images, recognize notable elements and layouts, and automatically tag and describe them. Taking this a step further, Concept Canvas, one of our Adobe Research projects, will allow users to search images based on spatial relationships between concepts.
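To make the tagging-and-search loop concrete, here is a minimal sketch of how auto-generated tags could feed a simple inverted index that answers keyword queries. The image names and tags are invented for illustration; a production system would use learned embeddings rather than exact tag matches.

```python
from collections import defaultdict

# Hypothetical tags, as an AI auto-tagger might produce them.
auto_tags = {
    "beach_01.jpg": ["beach", "sunset", "ocean"],
    "city_07.jpg": ["city", "night", "skyline"],
    "beach_02.jpg": ["beach", "surfer", "wave"],
}

# Inverted index: tag -> set of images carrying that tag.
index = defaultdict(set)
for image, tags in auto_tags.items():
    for tag in tags:
        index[tag].add(image)

def search(*keywords):
    """Return the images tagged with every keyword."""
    results = [index.get(k, set()) for k in keywords]
    return set.intersection(*results) if results else set()

print(sorted(search("beach")))          # both beach images
print(sorted(search("beach", "wave")))  # only the surfing shot
```

The point of automatic tagging is that the `auto_tags` dictionary no longer has to be filled in by hand — the model supplies it, and search inherits whatever vocabulary the model has learned.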
Similarly, automatic captioning uses AI to translate images into language that matches the words humans use when searching for those same images. These systems generate natural language descriptions of image content at various levels of detail. Perhaps most impressive of all, the AI behind these systems is learning not just to produce literal descriptions of concrete objects within images, but also to think by analogy.
Naturally, this is immensely helpful not only to visually impaired people consuming documents, but also to anyone looking to find relevant images with greater accuracy and speed.
Smart editing leads to more coherent workflows
Once a designer decides to remove certain elements from (or add elements to) an image or video, the process of making that a reality can be tedious and time-intensive. But smart editing features use a kind of visual common sense learned from data to allow object-level — not just pixel-level — detection and transformation of images.
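As a toy illustration of that distinction (the grid and labels below are invented), object-level editing means addressing every pixel that belongs to a labeled object in one step, rather than hand-selecting pixel coordinates:

```python
# A tiny "image" where each cell holds an object label
# produced by a segmentation model: 0 = background,
# 1 = an object the designer wants to remove.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
]

def remove_object(img, label, fill=0):
    """Object-level edit: replace every pixel of one object at once."""
    return [[fill if px == label else px for px in row] for row in img]

edited = remove_object(image, label=1)
```

A real system would of course inpaint a plausible background instead of a flat fill — that background synthesis is the hard part the paragraph above describes — but the selection itself collapses to naming the object.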
Recently, we showed off the results of Project Cloak, which lets you remove regions of video and automatically fill in the background in a coherent way. Then there’s Project Deep Fill, which uses machine learning to modify images under the designer’s control. A third example — and maybe the most striking — is Project Puppetron, which uses smart editing to apply a distinct artistic style not only to a single image, but to an entire video.
However, multistep applications may best illustrate the real power of smart editing. They combine several AI ideas into a coherent workflow, and in many cases these workflows execute so fluidly and seamlessly that they feel like a single action.
Multistep applications: SceneStitch
Multistep applications: SkyReplace
With applications like these, it’s not hard to see how AI could become not only the ultimate time-saver for creatives, but a powerful, virtual creative agent.
Powering smarter AI agents
Once we have numerous AI-enabled workflows within the system, one of the challenges is coordinating or controlling them all. The answer to this problem might be the use of agents.
Today, we are able to use agents the same way we use keyboard shortcuts. In our lab, for example, we have an iPad that we can tell to reframe an image or flip it sideways. It’s a standard graphical user interface (GUI) driven by voice instead of a keyboard. However, the creative applications of the future will require much smarter, richer agents.
An example of a smarter agent would be one that can recognize the user’s state of attention — whether they’re in flow, bored, or ready to be interrupted. Today, neural networks exist that can recognize facial expressions and could make this possible, resulting in more polite, people-compatible agents. You might call this Agent EQ.
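A minimal sketch of that "Agent EQ" policy might look like the following. The state names and the urgency parameter are hypothetical stand-ins for whatever an expression-recognition model actually reports:

```python
# Hypothetical attention states a facial-expression model might report,
# mapped to whether the agent may break the user's focus.
INTERRUPTIBLE = {
    "bored": True,
    "idle": True,
    "in_flow": False,
    "concentrating": False,
}

def should_interrupt(state, urgency="low"):
    """Only interrupt when the user seems receptive, or when the
    message is genuinely urgent. Unknown states default to silence."""
    return urgency == "high" or INTERRUPTIBLE.get(state, False)
```

The interesting design choice is the default: a polite agent treats any state it cannot classify as "do not disturb."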
Of course, when we talk about creative agents that could give us recommendations, we also have to take into account the possibility of artistic differences. Our creative assistants may like some of our ideas and dislike others, while remaining convinced that their own suggestions are sound. Imagine being the acclaimed author Lewis Carroll, trying to type “’Twas brillig, and the slithy toves” with a spell-checker running. The line would simply get corrected away, forcing you to be less original. AI technology will have to be designed to avoid incumbent bias and to recognize your need to reject its suggestions in favor of originality.
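The Carroll problem can be shown in a few lines. This sketch (the tiny dictionary is invented) snaps unknown words to their nearest dictionary neighbor — exactly the behavior that would destroy the poem — and the fix is simply an explicit way for the writer to overrule the agent:

```python
import difflib

# A deliberately tiny dictionary for illustration.
DICTIONARY = {"twas", "brilliant", "and", "the", "slimy", "stoves"}

def autocorrect(word, respect_original=False):
    """Snap unknown words to the closest dictionary word,
    unless the writer asks to keep the original."""
    w = word.lower()
    if respect_original or w in DICTIONARY:
        return word
    matches = difflib.get_close_matches(w, sorted(DICTIONARY), n=1)
    return matches[0] if matches else word

print(autocorrect("slithy"))                         # -> "slimy": the coinage is corrected away
print(autocorrect("slithy", respect_original=True))  # -> "slithy": originality survives
```

The `respect_original` flag is the programmatic version of "recognize your need to reject its suggestions."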
Enabling a more beautiful, creative world
Thanks to AI, the days of being an artist who invents one particular style and then does the same thing for 20 years will be a thing of the past. Sophisticated stylization algorithms, like Puppetron, can apply a reference style to a variety of content automatically. This will lead to a stronger emphasis on novelty — so many creatives will have to be more like Picasso, who reinvented his style throughout his career.
The demand for creativity will also increase. There’s no limit to the need for creative content, but its quality is frankly limited by people, by time, and by how creative they can be. As AI makes creative work faster and more spontaneous, a larger army of people will be producing creative work, and demand will grow along with it.
However, there’s one caveat. Yes, AI will raise the quality of creative work across the board, and that will be wonderful. But some people will use the technology to fake artistry and reality — for example, by using voice technology to change the meaning or context of what someone said. That’s the downside of any technology, including AI: the possibility of misuse. Still, the benefits we reap from innovation far outweigh the potential negative consequences, especially if we put safeguards and governance structures in place to protect against misuse. Beyond technology, the best defense will be a sophisticated public who are on their guard and who know what is possible — that content can be manipulated, often to delight them, but sometimes to mislead them.
When everyone can attain a certain standard of visual quality, it will force those who are truly gifted to set a new standard of creativity and originality. And I think the world will be even more beautiful and entertaining as a result.
Read more about artificial intelligence in our Human & Machine collection.