Welcome to Kushville

I asked a robot to make me a weed utopia... Here's how you can too.

Words and imagery by Amy Paterson

Much to the horror of anyone who found Black Mirror too uncomfortable to watch, 2023 is turning out to be a big year for artificial intelligence.

Over the past few months programmes like ChatGPT (a language-processing bot that instantaneously answers questions and writes text through word prompting) have filtered into the mainstream, and are now fundamentally and permanently changing how the masses work, write and relate to technology.

As a writer myself I struggle with the thought of surrendering control of my words and ideas to a robot - if only because it has taken years to hone the skill of creating a framework of thoughts and sentences, and maybe also because I actually enjoy it.

But creating art? Eh… not so much.

In university I would make every ridiculous excuse to get out of attending a friend’s monthly life drawing class, and I avoid Pictionary at all costs. I can’t go to the Clay Cafe without a Pinterest reference printed out, because when I didn’t I got so flustered figuring out what to make that I just painted the whole vase green in a panic. 

And while some people find making art meditative and liberating, for me it is incredibly frustrating and stressful to have a vision or idea in mind that I can’t execute because I lack the ability, time and (let’s be honest) motivation. I love taking photos as a hobby, but the kinds of images you can capture are determined by budget, location, subjects, lighting… If only these limitations could somehow be overcome?

Well, AI art generation has entered the chat.

A few weeks ago when doom scrolling Instagram at 1am I stumbled upon Planet Fantastique - an account run by a professional art director who uses AI to generate mind-blowingly beautiful photos depicting retrofuturistic, pastel-hued aesthetic worlds complete with their own architecture, fashion and kooky citizens. And my mind just could not believe what it was seeing.

There’s Marigold, the orange-and-yellow flower-powered Midwest town. The city of Promise, a sixties-meets-sci-fi pink urban fever dream. Thousand Palms, a whacked-out take on the sunny suburbia of Palm Springs. And La La Land, an intergalactic mod-style TV studio set. Every single image is unnervingly lifelike, incredibly imaginative and so different to anything you’ve ever seen or even thought of before. Every single frame is a delicious slice of what the creator calls “dream photography” - and every single one is created by a bot.

So with visions of utopian cannabis-themed scenes in mind, I was quickly convinced to test out this new tech for myself.  Long story short, I was not disappointed - and I stand before you an unabashed AI convert and aspiring AI artist, reporting back here like a proud student at a science fair enthusiastically offering up my results for all to see. 

Never thought I’d be the one to say this, but here’s a basic how-to for bringing your artistic vision to life… with the help of AI.

_________________________________________________________________________

PICK YOUR PROGRAM

When it comes to AI art generation, the current frontrunners are the perfectly named DALL-E by OpenAI (the company behind ChatGPT, co-founded by Elon Musk) and Midjourney, an independent research lab.

Hearing that Midjourney was trickier to use, I decided to give DALL-E a try first. The site’s design is definitely slick and very simple to use, giving you 50 free credits following a swift sign-up process, plus 15 free credits per month thereafter. All you have to do is type a detailed (but direct) description into the prompt box and wait a few seconds before it spits out 4 diverse iterations of an image based on that description.

If there are ones you like, you can download them directly; or if you’re into one of them but it’s not quiiite right, you can ask the bot to generate a further 4 variations of that particular image. If you don’t like any, you just enter the prompt again, maybe reshuffling your words or changing your adjectives to get different results.
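(A quick aside for the more technically minded: the same prompt-in, pictures-out loop is also exposed through OpenAI’s API rather than the website. The snippet below is only a rough sketch of that - it assumes the 2023-era openai Python package and an API key of your own, and it isn’t needed for anything else in this article.)

import os
import openai

# Authenticate with your own OpenAI API key (environment variable name is just an example).
openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask DALL-E for 4 takes on a single text description, just like the prompt box on the site.
response = openai.Image.create(
    prompt="cute watercolour of a gal smoking weed",
    n=4,
    size="1024x1024",
)

# Each result comes back as a URL you can open or download.
for image in response["data"]:
    print(image["url"])

# The API also has a variations endpoint (openai.Image.create_variation) that mirrors
# the site's "generate variations" button, taking an existing square PNG as its input.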

While I definitely had some fun with DALL-E, mocking up some cute watercolours of gals smoking weed in the style of different painters, it quickly became obvious that it was not going to help me achieve the eerily lifelike, uncanny-valley feel of the Planet Fantastique micro-worlds. And when my brother-in-law saw my reference, he was quick to confirm that Midjourney would give me what I was after.

Unlike DALL-E, Midjourney doesn’t operate as its own site but rather through the Discord social messaging app. Downloading the app and joining the Midjourney server was relatively painless, after which you need to hop into one of the many “newbie” chat rooms to start your prompting. You get started off with roughly 25 free credits before you hit a (relatively inexpensive) paywall.

Midjourney is also different in that, while you can plug your prompt straight into DALL-E, once you’re on Discord and in one of the Midjourney chat rooms you first need to type /imagine into the message bar at the bottom, click on the /imagine prompt that appears directly above it to autofill the command, and only then write your prompt caption. Hit enter, and just like DALL-E the bot will soon generate four images based on your word prompt.
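To make that concrete, by the time you hit enter the message bar will contain something along these lines (the prompt text here is purely for illustration):

/imagine prompt: women on the streets of a retro futuristic cannabis utopia, hyper real, street photography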

But be warned - this is where Midjourney gets a bit messy and not all that user-friendly, because as people in the chat room keep typing and hitting enter on their own requests below, yours gets pushed higher up the list and you’ll need to keep scrolling up to find it. Once the images have generated to 100%, your prompt and results will briefly pop to the bottom again, but then start moving upwards once more as requests from other users roll in.

It’s not rocket science to keep track of your work, but it definitely is a lot less tidy and tailored to individuals than the DALL-E interface. But on the upside, it feels like you can do a lot more with what you get from the bot on Midjourney.

Under each result (4 images) you will see a row of 4 buttons labelled U1-U4, and below that a second row labelled V1-V4. These correspond left-to-right with the 4 images you receive, 1 & 2 being the top row and 3 & 4 being the bottom row. U stands for Upscale, which lets you expand an image you like to a higher quality for download, while V stands for Variation, which gets the bot to generate 4 further slightly altered versions of that selected image. There is also a ‘reroll’ button, which will generate 4 brand-new images if you don’t like anything in the batch.

As promised, the images from Midjourney had a completely different look to the ones from DALL-E, especially photographic-style images, which have a significantly more lifelike quality to them. Because of this, Midjourney seems to have a lot more stylistic range than DALL-E, and it is also particularly skilful at imitating the photographic and cinematic styles of famous photographers and directors.

THE ART OF PROMPTING

As I got hooked on my new hobby and realised how much weight each word in your prompt carries in the final images, I quickly started to wish there was a course I could do to maximise my prompt efficiency.

But the closest thing I could find to a course was this guide on writing prompts for Midjourney specifically, and it proved to be a great starting block for creating Kushville.

Taking Planet Fantastique as my baseline inspiration, I mostly used words like “retro”, “futuristic”, “sci fi” and “utopian”, as well as “realistic”, “hyper real” or “lifelike” to get less rendered faces, and included “street photography” for a more candid sensibility in the angles and backgrounds.

As is recommended, I also always added an artist reference - “in the style of William Eggleston/Vivian Maier” - to the end of the prompt, in the hope of achieving aesthetic consistency across my overall collection of images. And then, to give things a bit of a 420 feel, I peppered in “cannabis”, “marijuana”, “green” and “smoking” depending on the subject (e.g. “two cannabis-themed air hostesses” or “women on streets of a cannabis utopia”).

FYI, it’s really not necessary to formulate full sentences using conjunctions, articles and prepositions. Try to keep your terms as directly and narrowly descriptive as possible, right down to the finer details: “two men in suits and green hats playing chess in retro futuristic cannabis utopia, realistic photograph in the style of Vivian Maier.”
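Strung together, that formula - subject, setting, style keywords and an artist reference - gives you prompts along these lines (reconstructed here for illustration rather than quoted verbatim):

two cannabis-themed air hostesses in a retro futuristic utopia, hyper real, street photography, in the style of Vivian Maier
women on the streets of a cannabis utopia, lifelike, realistic photograph, in the style of William Eggleston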

BLIPS & BLIND SPOTS

It is probably worth noting at this point that AI art generation is very far from an exact science, and there are two major snags you’ll notice pretty quickly with the images both DALL-E and Midjourney create.

The first is that whenever writing features in the pictures it comes out as almost entirely gibberish text - which starts to make sense when you realise it keeps the system from being used for branding or advertising purposes (see: corporate gain), and there are probably some copyright issues influencing this too.

The second glaringly obvious snag is that the people in the images, while spookily lifelike in some ways, can also very often turn out creepily warped and obviously artificial. AI image tools seem to struggle particularly with rendering realistic hands and facial/body proportions, because these details are pulled and stitched together as an approximation from countless references.

And while it does give many of the people in the artworks a kind of cool, subtly surreal and very distinct look - maybe even making them a little easier to digest because you subconsciously know they’re fake - they can also be downright freaky.

It is very common to get something like a single hand with 8 mangled fingers all pointing in different directions as they attempt to grasp a cigarette, because these AI outputs have not yet been refined to the point that they will generate a perfect likeness of that level of detail. Often faces are overly angular or have huge foreheads, and eyes are another regular issue, with many of them ridiculously squint or fully black/white as if you’d used a red-eye fixer on them.

So common are these blips that, without throwing any shade (the images are no less amazing for it), I am pretty sure that the mastermind behind Planet Fantastique either uses a more advanced bot or edits the faces of her generated images in Photoshop to achieve the perfectly proportioned faces of her dreamscape subjects. There isn’t a surplus limb or skew eyeball in sight.

Another weird thing I picked up on is that you seem to have to specifically prompt for the people in your images to be smiling or look happy. Perhaps it was just the prompts I used (or the ones I didn’t), but even with descriptions that featured the words “happy” or “laughing” a lot of the generations I received back from the bot featured faces with very sullen, serious expressions.

The word “cannabis” itself also proved a bit tricky to incorporate into the images, with entirely different plants - and sometimes just the colour green - standing in for the plant itself, while for whatever reason terms like “bong” and “joint” turn up strange renders and really just seem to go over the bot’s head at this point.

________________________________________________________________________

Inevitably these are all kinks that will be ironed out with time, though. For now it’s just fun to enjoy the weird and wonderful results, and to embrace AI’s imperfect interpretations before the renders get too close to reality to tell the two apart.

Plus, a point that should probably have been made earlier is that AI art modelling is a top-tier activity for when you’re stoned. That lucky-packet feeling of hitting enter on a prompt and seeing your ideas spring so quickly to life in ways you could never imagine is amazing. Super satisfying, often awe-inspiring, and totally hilarious when the results slightly miss the mark.

And the big question of whether these images actually constitute artwork?

An interesting conundrum to consider, because while the bot may be doing the legwork, you are ultimately the puppet master when it comes to prompting. But at this point that’s maybe not even a debate worth having, in the name of just having fun and getting better acquainted with what is ultimately radical and very powerful technology that will soon become part of our everyday lives.

Because let’s face it - while AI may be having a big year now, it is going to be the first of many for the rest of our lives, as technology continues to innovate, improve and evolve at the speed of light. And you know what they say: if you can’t beat ‘em, join ‘em in bringing your wildest imagination to life.
