Making faces: How to train an AI on your face to create silly portraits
Follow our step-by-step tutorial to paste your face onto anything your heart desires.
Shaun Hutchinson – Mar 22, 2023 11:30 am UTC
By now, you’ve read a lot about generative AI technologies such as Midjourney and Stable Diffusion, which translate text input into images in seconds. If you’re anything like me, you immediately wondered how you could use that technology to slap your face onto the Mona Lisa or Captain America. After all, who doesn’t want to be America’s ass?
I have a long history of putting my face on things. Previously, doing so was a painstaking process of finding or taking a picture with the right angle and expression and then using Photoshop to graft my face onto the original. While I considered the results demented yet worthwhile, the process required a lot of time. But with Stable Diffusion and Dreambooth, I’m now able to train a model on my face and then paste it onto anything my strange heart desires.
In this walkthrough, I’ll show you how to install Stable Diffusion locally on your computer, train Dreambooth on your face, and generate so many pictures of yourself that your friends and family will eventually block you to stop the deluge of silly photos. The entire process will take about two hours from start to finish, with the bulk of the time spent babysitting a Google Colab notebook while it trains on your images.
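We'll be working in a point-and-click web UI for the actual walkthrough, but if you're curious what a text-to-image call looks like under the hood, here's a minimal sketch using Hugging Face's diffusers library. This is an illustration only, not the method we'll use below; the model checkpoint and prompt are placeholders of my choosing.

```python
# A minimal text-to-image sketch using the diffusers library.
# Assumes an Nvidia GPU with CUDA; checkpoint and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a Stable Diffusion v1.5 checkpoint
    torch_dtype=torch.float16,         # half precision to fit in less VRAM
)
pipe = pipe.to("cuda")

# Turn a text prompt into an image in a few seconds on a decent GPU.
image = pipe("portrait of a superhero, digital art").images[0]
image.save("superhero.png")
```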
Before we begin, a couple of notes:

System specs
For this walkthrough, I’m working on a Windows computer with an Nvidia RTX 3080 Ti that has 12GB of VRAM. To run Stable Diffusion, you should have an Nvidia graphics card with a minimum of 4GB of video RAM. Stable Diffusion can also run on Linux systems, on Macs with an M1 or M2 chip, and on AMD GPUs, and you can even generate images using only the CPU. Those methods require some tinkering, though, so for the purposes of this walkthrough, a Windows machine with an Nvidia GPU is preferred.
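Not sure how much VRAM your card has? Assuming you already have Python and PyTorch installed (you could also just check Task Manager or run nvidia-smi), a quick sanity check looks something like this:

```python
# Quick GPU sanity check with PyTorch (assumes PyTorch is installed).
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")
    if vram_gb < 4:
        print("Warning: Stable Diffusion generally wants at least 4GB of VRAM.")
else:
    print("No CUDA-capable GPU found; you'll need one of the workarounds above.")
```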
Ethical concerns

Further Reading: AI image generation tech can now create life-wrecking deepfakes with ease

When it comes to generative image programs like Stable Diffusion, there are ethical concerns I feel I should acknowledge. There are valid questions surrounding how the data used to train Stable Diffusion was gathered and whether it was ethical to train the program on artists’ work without their consent. It’s a big topic that’s outside the scope of this walkthrough. Personally, I use Stable Diffusion as an author to help me create quick character sketches, and it’s become an invaluable part of my process. I don’t, however, think work created by Stable Diffusion should be commercialized, at least until we settle the ethical dilemmas and determine how to compensate artists who might have been exploited. For the time being, I feel that Stable Diffusion should remain for personal use only.
Lastly, tech like Stable Diffusion is simultaneously exciting and terrifying. It’s exciting because it gives people like me, who peaked artistically with fingerpaints in kindergarten, the ability to create the images I imagine. But it’s terrifying because it can be used to create frighteningly realistic propaganda and deepfakes with the potential to ruin people’s lives. So you should only train Stable Diffusion on photos of yourself or of someone who has given you consent. Period.
Now, who’s ready to do this?