Google’s new image generation tool shows how perceptions of AI in creative work are evolving

While this year’s Google I/O conference featured all the pomp and circumstance surrounding the company’s annual product announcements, a short drive away at a different, quieter venue, the company shared clues as to where its AI strategy is headed. There, Google announced an experimental creative project from Google Creative Labs, called Infinite Wonderland, as well as a new tool called StyleDrop. The project’s concept was to visually reimagine the story of Alice in Wonderland with the help of four professional designers who trained a Google AI model to render images in their personal artistic styles. The project hints at Google’s ambition to reach creative professionals, and shows how perceptions of AI tools within the industry itself have been evolving in a positive direction.

Designers Haruko Hayakawa, Eric Hu, Erik Carter, and Shawna X worked with Google Creative Labs over several months to learn how to use Google DeepMind’s Imagen 2 AI generation model and, more specifically, its in-development StyleDrop tool. The tool allows users to fine-tune the model to produce illustrations in their own style by uploading their own work as reference images. The designers trained StyleDrop in an iterative process with groups of 10 images at a time, though the company says it can be trained with as little as one reference image.

The resulting Infinite Wonderland product, a digital version of Lewis Carroll’s Alice in Wonderland, shows text from the book in the left-hand column of a web page accompanied by an image on the right. The interface allows readers to click on any sentence and change the accompanying image to the style of any of the four designers, or of the original illustrator, John Tenniel. Some of the images have little AI tells (a wonky eye here and there, for instance), but overall the results are strikingly sophisticated, and look like the authentic work of the artists who rendered them.
Of course, that’s both the challenge and the promise of new AI tech. As Alice says toward the end of the book, “It’s no use going back to yesterday, because I was a different person then.” And today’s professional creative landscape does look a lot different than yesterday’s. New AI capabilities continue to roll out amid low-level rumblings of the disruption they cause.

On the one hand, AI is seen as democratizing for people who don’t have creative skills and who can now produce their own images. On the other hand, some see it as an industry threat. Typically, if you can’t do something yourself, you hire someone who can. So would AI tools Canva-tise design as we know it? AI also leaves open questions around copyright law, fakery, and craft. As recently as April, artists filed a class-action copyright lawsuit against Google, claiming it used their work without permission to train Imagen. (A previous case was dismissed.)

At the same time, thinking within the creative community has been evolving as professionals have had more time to experiment with the technology. There are real use cases for bringing AI into the creative process, and tangible benefits to working with a generative AI “style assistant,” as StyleDrop is described on its GitHub page.

Hayakawa, a freelance creative director and CG artist who works in a metallic, glamour-meets-retrofuturist style, and who has worked with clients like Bon Appétit magazine, Fly by Jing, and Poppi, had hesitations about AI and hadn’t really used it in her work before this project, she says. Now she sees AI as another tool in her toolbox, one that enables her to do more within her practice.
“I don’t think I’d ever give up a ton of creative control, to be quite honest, but what’s interesting about the tool is that it allows me to still create the work but also have this variant, mass scale that I can do all sorts of things with.” Using an AI tool “opens up a whole new door of possibilities,” she says. “I step up into a different role with my work rather than just purely being an executor.”

Matthew Carey, Google Creative Labs group creative director, considers feedback like Hayakawa’s a good sign. “Those are positive signals for us—that this technology could be really useful for artists beyond the four from this collaboration,” he says. Google Creative Labs works with professionals across disciplines in its “lab sessions,” or creative projects. Broadly, Carey says, “A lot of these tools that are coming out of Labs at Google are awesome glimpses of what the future of creation could be and ways to supplement the creative process.”

Google Creative Labs is looking to execute on that value proposition quickly with StyleDrop, and is currently exploring ways to integrate it into Google’s existing products for public use. “There are teams within Google looking at how to integrate it into products in a way we want to get out there quickly but also in a way that’s responsible and protects the work they’re using it [on],” Carey says.

Carey describes Infinite Wonderland as an experimental project (as an entity, that’s kind of Google Creative Labs’ bread and butter). So if it’s experimental, what were they hoping to discover? “The hope with this is to use everything that we’ve learned, and that the artists learned, about using this technology to make it better for them and people in their communities to build with tools and achieve things of scale that they might not have been able to before,” Carey says.

“Ultimately, yes, we’re gonna have this democratization of image-making,” Hayakawa says. “As an artist, I love the creation process.
I’ve been doing this since I was a child. It’s what gives me a sense of purpose in my life. I came on to this project because I don’t dabble too much in AI, and I really wanted to understand what these tools can mean for me, because technology doesn’t stop for anybody.” If anything, it gets curiouser and curiouser.