
Training Stable Diffusion LoRA on Apple Silicon M2 Mac GPUs (Metal)

Environment install

It is suggested to work in a Python virtual environment (Python 3.10.6 was used here). Set up the virtual environment as follows.

Accelerator settings

Prepare data for training

See the distribution source's description for details. The dataset can be created anywhere, but follow the directory structure and naming conventions explained by the distribution source. Make sure the image file names and types are appropriate.

Training run

I wanted to use kohya_gui.py, but I didn't have the energy to resolve the errors around the UI, so I used the CLI for the time being. Check for stray spaces after copying and pasting the command; they cause errors such as "zsh: no such file or directory". The arguments above were adjusted just to get things working, so further tuning is needed for efficient training. Be careful: if you try to use fp16 or AdamW8bit, it probably won't work. Also, since my environment is an M1 Max (64 GB), the memory settings may need adjusting. For the other arguments, refer to the distribution source pages:
https://github.com/kohya-ss/sd-scripts/blob/main/train_README-en.md
https://github.com/kohya-ss/sd-scripts/blob/main/train_network_README-en.md

Generating sample images during training

Add the sample-generation arguments to the training command and save the generation prompts in a text file. Because the sample-generation code is written for CUDA, rewrite the parts of kohya_ss/library/train_util.py that say cuda to mps (a simple find-and-replace in a text editor worked for me). Change every gpu reference to mps as well.

PyTorch version note

PyTorch versions around 1.13.0 have a bug where the loss becomes NaN: https://github.com/pytorch/pytorch/issues/88331. It has since been fixed, so upgrade.

Using the trained LoRA outputs

The trained model is saved in the directory specified by output_dir. After that, you can use the LoRA normally with web-ui etc.
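The cuda-to-mps rewrite described above can also be scripted instead of done by hand in a text editor. Here is a minimal sketch, assuming the file lives at kohya_ss/library/train_util.py as in the text; a blanket replacement is blunt (it will also touch strings that merely contain "cuda"), so review the diff before training.

```python
# Sketch of the cuda -> mps find-and-replace the text describes doing in a
# text editor. The path is taken from the text; adjust it to wherever your
# kohya_ss checkout lives. Review the changes before training.
from pathlib import Path


def patch_device_references(path: str, old: str = "cuda", new: str = "mps") -> int:
    """Replace device references in a source file; returns the number replaced."""
    p = Path(path)
    text = p.read_text(encoding="utf-8")
    count = text.count(old)
    p.write_text(text.replace(old, new), encoding="utf-8")
    return count


# Usage (path assumed from the text above):
# patch_device_references("kohya_ss/library/train_util.py")
# patch_device_references("kohya_ss/library/train_util.py", old="gpu")
```

The same helper covers the "change everything gpu to mps" step by passing old="gpu".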

Stable Diffusion – Train Own Face

These are the steps I use to train my own face in Stable Diffusion. Image requirements: Load a base SD checkpoint (SD 1.5 or SD 2.1 768, for example) in automatic1111 before starting (custom models can sometimes generate really bad results), then start training. For me it takes about 25 minutes to train up to 5k steps. While training, you can check progress in the "textual_inversion/date/name of embedding/images" folder; an image is created there every 50 steps. If the likeness of your subject isn't appearing after 1000 steps, cancel the training and check what you did wrong.

If you're happy with your results, you can then play around with the prompt template file. Add details to it like "a photo of the beautiful [name] woman with brown eyes and blond hair". If you feel the inversion could be trained a little more, just go back into the training tab, increase the max steps to 8k/12k/15k, and continue training (no need to start over). Also remember that you can pick any of the generated files in "textual_inversion/date/name of embedding/image_embeddings" and move them into the embeddings folder; then you can add the name of that embedding to your prompt (e.g. embdname-4500) to check whether it works better.

First try to get good, stable results. Then try playing with the prompt file (add descriptions), with different checkpoints (although, beware, most will generate just garbage), or with the number of vectors (to increase embedding strength). Good luck.

Extra, worth checking: Train Your Own Stable Diffusion Model Locally — No Code Needed
https://betterprogramming.pub/train-your-own-stable-diffusion-model-locally-no-code-needed-36f943825d23
Another reference: https://www.reddit.com/r/StableDiffusion/comments/10ettxl/comment/j4un0zk/
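The step of picking an intermediate embedding and moving it into the embeddings folder can be scripted. A minimal sketch, assuming snapshots are named like "embdname-4500" with some file extension (the exact naming and both directory paths are assumptions; adjust them to your own install):

```python
# Sketch: copy the intermediate embedding saved at a given step into the
# web-ui embeddings folder so it can be used in prompts by name, e.g.
# "embdname-4500". Snapshot naming ("<name>-<step>.<ext>") and the
# directory layout are assumptions based on the text above.
import shutil
from pathlib import Path


def pick_embedding(run_dir: str, step: int, embeddings_dir: str) -> Path:
    """Copy the snapshot saved at `step` into `embeddings_dir`; returns the new path."""
    run = Path(run_dir)
    matches = sorted(run.glob(f"*-{step}.*"))
    if not matches:
        raise FileNotFoundError(f"no embedding snapshot for step {step} in {run}")
    dest = Path(embeddings_dir) / matches[0].name
    shutil.copy2(matches[0], dest)
    return dest


# Usage (hypothetical paths):
# pick_embedding("textual_inversion/2024-01-01/embdname/image_embeddings",
#                4500, "embeddings")
```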

The “Touch it Once” Principle

It’s surprisingly easy to fall into inefficient routines, like indulging in junk food, binge-watching Netflix, or starting your day by checking emails. We all find ourselves engaging in these unhealthy and unproductive behaviors from time to time. To become more productive with our time, one approach is to identify and uproot these “bad” habits. The process is relatively straightforward: I’ve delved deeper into the process of uprooting and replacing bad habits in a more comprehensive article, which you can find here [include link to the article].

Alternatively, another path to enhanced productivity is learning and applying fundamental principles of effective time and energy management. Both methods, eliminating bad habits and incorporating productivity fundamentals, have proven to be effective in upgrading personal systems. By making these changes, we can move closer to our desired state: a place where we consistently take action towards our goals, achieve more results in less time, and can even take guilt-free breaks knowing we’ve accomplished quality work during the hours we’ve dedicated.

Unlike notable habit experts James Clear, BJ Fogg, and Charles Duhigg, I’ve found that starting with the fundamentals yields better results. Once we grasp these fundamental principles, we can establish rituals that solidify them into our daily routines. For most individuals, the fundamentals are surprisingly effective, requiring less willpower to implement while delivering significant wins early on.

Let me share a personal example that highlights the power of fundamentals. As an information worker who spends a lot of time sitting, I realized the importance of taking mental and physical breaks throughout the day to boost cognitive performance. To accomplish this, I decided to pick up juggling as a fun activity to pause my work and get my blood flowing. I ordered three juggling balls from Amazon and began my journey. However, for several weeks, I…