Illustrate Your Prototype with the Help of A.I.

Note from the future (Jan 2023): This article was originally written over a year ago, in December 2021. Since that date, a seismic shift has happened in the A.I. image-generating industry. A lot of this article still stands, but I have added an addendum to the end of it addressing some of the more recent changes.

Back in 2017, I wrote a post about prototyping your board game and suggested that designers should hold off commissioning artwork until they are ready to publish.

In that post, I spoke about all the options available when arting up your game, from stock websites to drawing it yourself.

Some of the art done in crayon for our shelved game, Tempest: Shards of the Gods.
Our cards with a bit of hand-drawn crayon art added, using The GIMP to create a design.

Sometimes though, you need something super quick, and you don’t want to go through a million art websites to find something of sufficient resolution to use. Plus, you want all of your art to look consistent across your game, and you don’t want players looking at one of those art pieces and saying “Oh hey, that’s a piece of fan art for Critical Role!”

Enter today’s contender: Wombo Art’s “Dream”.

This is a completely free app, available on your phone or via a web implementation, that uses AI-driven “dreaming” to create artwork from a text prompt.

So let’s say I wanted to create those above cards using art generated from Dream. I go to their website, type in “Norse warrior” as the prompt, select “Fantasy Art” as the art style, and get … well … the results can sometimes be a bit uncanny, so you might need to generate it a couple of times until you get something less surreal.

If you squint, you can see where the AI is getting its inspiration from.

That one on the right looks pretty good. So now I want to make the Einherjar – also Norse warriors, but ghostly – so I keep “Norse warrior” but I swap the style, playing around with “Dark Fantasy”, “Psychic”, “Vibrant” and “Synth Wave” to see what comes up.

Every style has its own predetermined requirements (like Synth Wave always has spaceships!)

And just for kicks, let’s throw “Norse ghost warrior” in as a prompt and see.

Nailing the ghostly look.

Well hot dang. Those are some sweet illustrations. And it took me less than five minutes to prompt the AI software to create them. The good thing is, while you are working on one illustration, you sometimes accidentally come across one that works for another card. This one came up while I was searching “Ghost Warrior” in the Fantasy style, but I thought it would make a great Man-at-arms or Shieldmaiden instead.

Almost looks like that photograph of Lagertha from Vikings.

On top of that, Dream has a variety of art styles available, so it’ll work for your horror, fantasy, sci-fi/cyberpunk, steampunk or even Asian-myth game.

Now, as you can already guess, the program does have some limitations. It only seems to create artwork in a portrait orientation, and it is very, very bad at creating any single organic subject. If I wanted four cards representing a Jungle Temple, I could very easily create them with the app using the prompt “Jungle Temple” and the fantasy style, and running it four times. If I wanted to differentiate those further, I could alter the prompt with words like “Jungle Waterfall Temple” or “Jungle Flower Temple”.

But give it the prompt of “Taylor Swift” and it will give you the idea or concept of Tay Tay. It looks more like a dream of a popstar than the real deal. But, for prototypes, it is still quick and effective, and as you saw from the Einherjar searches, it will eventually give you something close enough to work in card game format. And while the art itself has a distinct style, that can also be a downside, because some people might not like it.

I guess this could be TayTay … But, like, in a real cyberpunk NFT sort of way.

The other downside is that it is only usable as prototype art. As the app’s terms of service say: “The Service and its original content, features and functionality are and will remain the exclusive property of WOMBO and its licensors.” So you aren’t going to be able to use this art for your final game. But then, you weren’t going to be able to use clipart either.

However, it can be a useful tool for your game designing when you need something rapidly that fits with your existing aesthetic and is close to what you are looking for. When you are adding or tweaking design elements later, you will be able to craft new artworks that still fit the rest of your game, resulting in something that looks like it was painted to integrate together and fade into the background, rather than a Frankenstein’s Monster-like pastiche that distracts players from the gameplay. Fading into the background is really what you want for a prototype. I found it works exceptionally well for backgrounds, locations, card frames and that sort of thing, where you would normally want very low-detail, plein-air vibes.

(Also, as a bonus, you can use it for your home tabletop roleplaying games when your players go off your carefully prepared path and end up in a new city, and one of them asks what it looks like!)

“Ummmm, I guess it looks like Seattle but icy?”

So there you go! I hope that helps. 🙂

Addendum from the future (2023):

Developments in A.I. image generation have come a long way since 2021. These days, new contenders like MidJourney, DALL-E 2 and Stable Diffusion share the market, with the potential to produce high-quality images rapidly and, in some cases, completely free (others require a subscription). Further, programs like Stable Diffusion can be specifically trained on an image set to produce a consistent style, which is essential for a finished commercial product: look at the reaction to Terraforming Mars’s inconsistent imagery, although that hasn’t hurt its success. It can be tricky to learn how to get consistent images, and it takes some time to work through variations in a program like MidJourney to get results you like. However, these tools also produce far better images than Wombo’s Dream did when I first wrote this article.

This can be incredible for everything I describe above: creating a prototype for your game. It can also be useful for creating textures when doing graphic design. For example, you can add the parameter “--ar 5:7” to a MidJourney prompt to get a roughly card-shaped aspect ratio.
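If you are batching out several related cards (like the Jungle Temple variants above), one way to keep the results consistent is to build every prompt from the same style and aspect-ratio settings, varying only the subject. A minimal Python sketch of that idea; the `card_prompts` helper is hypothetical, and only the `--ar` parameter itself comes from MidJourney:

```python
# Build a batch of image-generation prompts that share one art style and one
# card-shaped aspect ratio, varying only the subject, so the prototype art
# for a whole card set stays visually consistent.
def card_prompts(subjects, style="fantasy art", aspect="5:7"):
    """Return one MidJourney-style prompt string per subject."""
    return [f"{subject}, {style} --ar {aspect}" for subject in subjects]

for prompt in card_prompts(
    ["jungle temple", "jungle waterfall temple", "jungle flower temple"]
):
    print(prompt)
# → jungle temple, fantasy art --ar 5:7
#   jungle waterfall temple, fantasy art --ar 5:7
#   jungle flower temple, fantasy art --ar 5:7
```

You would still paste each line into the generator by hand, but keeping the style and ratio in one place means every card in the batch comes back looking like part of the same set.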

Just as I mentioned above with Wombo’s terms and conditions, though, the copyright of images created using these prompts is murky at best. There are multiple instances of artists identifying fragments of their own work in the images produced by others, and there are clear signs of where the A.I. software sourced its training data (for example, you can frequently find “signatures” in the bottom-right corner of images generated by MidJourney, and the Lensa app frequently displays artifacts that resemble a notification/power bar at the top, indicating it was trained partially on uncropped phone screenshots).

Some of the glitches in Lensa look suspiciously like it was trained on screenshots.

As such, unless you have gone to the trouble of training Stable Diffusion specifically on art to which you hold the license or copyright (or from the public domain), you should consider that you do not own the copyright to images produced through these systems. And, even if you did, the public perception of A.I.-generated images is such right now that you would find yourself on the wrong side of a P.R. disaster should such an accusation be accurately levelled at you.

A.I. image generation is still incredibly useful for quickly arting up a prototype, and I think it will become a useful tool for artists overpainting and iterating in future. However, for the moment, it is best to leave it at that.

Addendum from the future (2024):

Wow. Nothing has quite taken the art and design world by storm like the use of A.I. art in creative ventures. The moral implications of its use are still being explored and argued, and big companies are frequently defending themselves in the public space, not just for deliberate use of A.I. art but for accidental use (via freelancers hired by the company, or via stock art that is relied upon but contains undeclared A.I. imagery). Further, companies like Adobe are attempting to market their A.I. software as ‘clean’: art that is still created by A.I., but trained in a way that supposedly doesn’t infringe on copyright because unlicensed art has been scrubbed from the training data. Still, in some ways all A.I. art is tainted by its early days of training, and it’s hard to see how any A.I. art can ever be ‘clean’ in this way.

For published works, I think A.I. art is a no-go. Even ‘clean’ art erases a human worker from the process and takes a job away from someone whose creativity lends a spark to the finished product. For unpublished works in playtest or pitching, or to aid in describing a desired style or composition to an artist, I think there is still a place for A.I. art.

However, there are two things to add. First, you shouldn’t discount a hand-sketched composition to give your artists an idea. No matter how bad you might think you are at art, in some ways your unfiltered, bare-basics sketch is excellent for this, because it doesn’t narrow your artist’s perception of how you want the art to look. It means you are getting more of your artist in the finished pieces. Second, you should be extra cautious with the artists you hire, and ensure in the strictest terms in your agreements with them that they are not using A.I. art in any way in their compositions. Morality aside (and I think there are arguments to be made for tracing over A.I. art for poses, the same as you might trace over a Poser model, a stock image, or a posing dummy), the negative publicity if it is discovered isn’t worth the shortcut.