Hacker News | vipermu's comments

hey hn! I'm one of the founders at Krea.

we prepared a blogpost about how we trained FLUX Krea if you're interested in learning more: https://www.krea.ai/blog/flux-krea-open-source-release


Off topic but did you really hide scroll bars on the website? Why...?

  .scrollbar-hide {
    -ms-overflow-style: none;
    scrollbar-width: none;
  }


They probably did it because the website looks cleaner without a scrollbar, but many browsers already auto-hide scrollbars and only show them on hover or while scrolling. That said, on my setup the scrollbar is always visible (unless hidden by CSS), and I wouldn't have minded it at all.


UI brought to you by vibe code


nah, Krea is just from that side of design twitter where you don't uppercase letters and you can break the rules sometimes. very atypography-coded.



exactly


how will this change with ai though?


It already uses AI so I'm not sure what you mean.


it would be interesting to play with LCM-LoRAs and AnimateDiff and see how much this technique can speed up video generation.

not sure if it's possible to just plug-and-play it or if we would need an extra LCM-LoRA for the motion module.

once we have these sorts of models producing frames in milliseconds, we should be able to do something similar to this demo but with videos.


indeed; we're able to make it work with SDXL thanks to a new technique that got released yesterday called LCM-LoRA.

with LCM-LoRA you can turn models like SDXL into LCMs without any extra training, and you can stack other style LoRAs on top, like the ones you find on civit.ai.

in case you're interested, here's the technical report about LCM-LoRA: https://arxiv.org/abs/2311.05556
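the reason adapters like LCM-LoRA can be applied without retraining, and stacked with style LoRAs, is that each LoRA is just a low-rank additive update to the frozen base weights. here's a toy numpy sketch of that idea; the layer size, ranks, and scales are made-up illustration values, not the real SDXL weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy frozen base weight for one linear layer (e.g., an attention projection)
d = 8
W_base = rng.normal(size=(d, d))

def lora_delta(rank, scale, rng):
    # a LoRA adapter is a low-rank update: delta_W = scale * (B @ A)
    A = rng.normal(size=(rank, d)) * 0.1
    B = rng.normal(size=(d, rank)) * 0.1
    return scale * (B @ A)

# an "acceleration" adapter (playing the role of LCM-LoRA) and a style adapter
delta_lcm = lora_delta(rank=4, scale=1.0, rng=rng)
delta_style = lora_delta(rank=4, scale=0.8, rng=rng)

# merging is plain addition, so adapters stack without touching the base model
W_merged = W_base + delta_lcm + delta_style
```

since the update is additive, removing or re-weighting an adapter later is equally cheap: subtract it back out or rescale its delta.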


it uses a new technique called "consistency" that lets latent diffusion models predict images in far fewer steps.

some links here:

- https://arxiv.org/abs/2310.04378

- https://arxiv.org/abs/2311.05556
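the core trick in the consistency-models line of work is parameterizing the network so that at the smallest timestep it is exactly the identity, which is what makes one- or few-step sampling possible. a toy numpy sketch of that boundary-condition parameterization; the constants are illustrative, not tuned values:

```python
import numpy as np

eps = 0.002         # smallest timestep
sigma_data = 0.5    # assumed data standard deviation

def c_skip(t):
    # weight on the raw input; equals 1 exactly at t = eps
    return sigma_data**2 / ((t - eps)**2 + sigma_data**2)

def c_out(t):
    # weight on the network output; equals 0 exactly at t = eps
    return sigma_data * (t - eps) / np.sqrt(t**2 + sigma_data**2)

def consistency_fn(x, t, F):
    # skip connection + scaled network output, so f(x, eps) = x by construction
    return c_skip(t) * x + c_out(t) * F(x, t)

# any network F works here; at t = eps its output is multiplied by zero
F = lambda x, t: np.tanh(x)
x = np.array([0.3, -1.2, 2.0])
y = consistency_fn(x, eps, F)   # identical to x: the boundary condition holds
```

training then pushes the function to give the same answer at every timestep along a trajectory, so a single evaluation at large t can jump straight to a clean image.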


Here's a brief explanation of ControlNet, a new method that can be used to control large diffusion models in arbitrary conditions, such as image edges, human poses, or segmentation maps.
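The key mechanism in ControlNet is the "zero convolution": the trainable conditioning branch is attached to the frozen model through layers initialized to all zeros, so at the start of training the combined network behaves exactly like the original. A toy 1-D numpy sketch of that wiring (toy shapes and a dense projection standing in for the real convolutions):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

def block(x, W):
    # stand-in for one block of the pretrained diffusion model
    return np.maximum(W @ x, 0.0)

W = rng.normal(size=(d, d))    # frozen pretrained weights
W_copy = W.copy()              # trainable copy used by the control branch
zero_proj = np.zeros((d, d))   # "zero convolution": initialized to all zeros

def controlnet_block(x, cond):
    y = block(x, W)               # frozen path, untouched
    c = block(x + cond, W_copy)   # conditioned trainable path
    return y + zero_proj @ c      # zero init => no effect before training

x = rng.normal(size=d)
cond = rng.normal(size=d)         # e.g., an encoded edge map or pose
out = controlnet_block(x, cond)   # identical to block(x, W) at initialization
```

Because the added path contributes nothing at step zero, training can't catastrophically perturb the pretrained model; the conditioning signal fades in only as `zero_proj` learns nonzero values.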


really cool! that must have been a lot of work. Here's another great site for references: https://proximacentaurib.notion.site/proximacentaurib/parrot...


Thanks for the kind words. I collected them and built the site over ~2 months. The financial cost to run the prompts on DALL•E 2 was the hardest part!


This is a tangent, but what are your thoughts about the word on this image?

https://generrated.com/?prompt=digitalPainting&subject=anxie...

It seems pretty on-topic, actually. You've seen a lot of images, so I'm curious: does that happen often? And how significant do you think it is?


Good question. All the generations in the dataset were generated with version 1.3.


Could you somehow indicate this in the web ui of krea.ai? Maybe when there are more versions

(Why aren't 1.4 images part of the dataset? Someone said they are public too)


We’re in the process of adding them

