• Hacker News
  • wwarner 8 hours

    Haven't finished this but for me it's so refreshing to read some science on deep learning and not just weird predictions.

  • vivzkestrel 7 hours

    - just a heads-up

    - the links to the deep learning slides you posted here https://sander.ai/2014/05/29/slides-meetup.html are broken

  • oliverx0 12 hours

    Does anyone have good resources on a more practical approach to building diffusion models? I found Raschka's book Build a Large Language Model (From Scratch) really helpful for understanding a lot of the concepts behind LLMs, and I'm looking for a similar resource for diffusion models.

    throwaway219450 11 hours

    MIT's OCW is usually pretty reliable:

    https://www.practical-diffusion.org/lectures/

    There is also the more math-heavy https://diffusion.csail.mit.edu/2026/index.html

    Ifkaluva 12 hours

    I really liked Calvin Lou's tutorial. It's quite old at this point and probably not fully up to date, but I found it great for understanding the concepts.

  • programjames 13 hours

    It's a good post, but it's missing the connection to continuous normalizing flows. Diffusion models, flow matching, and consistency models are all biased approximations of continuous normalizing flows (which themselves have some slight biases, but fewer). Adversarial losses (e.g. RL, GANs) can somewhat help with bias, but training them has its own issues.

    programjames 13 hours

    As explanation, something I wrote previously:

    The most common approach to modeling continuous distributions is to train a reversible model f that maps the data distribution to another continuous distribution P that is already known. The original image x can then be recovered exactly, and its code length is the number of bits needed to encode its latent f(x) under P, plus a correction for the change of volume along the reverse path:

      −log P(f(x)) − log|det ∂f/∂x (x)|
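    In one dimension this expression can be checked directly, since the Jacobian determinant reduces to a scalar derivative. The map f(x) = x + tanh(x)/2 and base distribution P = N(0, 1) below are hypothetical choices, picked only because f is smooth and strictly increasing on all of the reals:

```python
import numpy as np

def base_logpdf(z):
    # log density of the known base distribution P = N(0, 1)
    return -0.5 * z**2 - 0.5 * np.log(2.0 * np.pi)

def f(x):
    # hypothetical invertible map: strictly increasing, maps R onto R
    return x + 0.5 * np.tanh(x)

def code_length(x):
    # -log P(f(x)) - log|det df/dx|; in 1-D the Jacobian determinant
    # reduces to the scalar derivative f'(x) = 1 + 0.5*(1 - tanh(x)^2)
    log_det = np.log(1.0 + 0.5 * (1.0 - np.tanh(x) ** 2))
    return -base_logpdf(f(x)) - log_det

# exp(-code_length) is then a normalized density over x: it should
# integrate to 1, which a crude Riemann sum confirms
xs = np.linspace(-10.0, 10.0, 20001)
total = np.exp(-code_length(xs)).sum() * (xs[1] - xs[0])
print(total)  # ≈ 1.0
```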
    
    This technique is known as normalizing flows, as usually a normal distribution is chosen for the known distribution. The second term can be a little hard to compute, so diffusion models approximate it by using a stochastic differential equation (SDE) for the mapping. When f is a solution to an ordinary differential equation,

      dx/dt = g(x)
    
    then

      log|det ∂f/∂x (x)| = ∫ Tr(∂g(x)/∂x) dt = ∫ E_{ε∼N(0,I)} [εᵀ ∂g(x)/∂x ε] dt
    
    The last equality is known as Hutchinson's estimator. Switching to a stochastic differential equation

      dx′ = g(x′)dt + ε(t)dW
    
    and tracking the difference δx = x′ − x, the mean-squared error approximately satisfies

      d(δxᵀδx)/dt = 2δxᵀ ∂g(x)/∂x δx,
    
    which is close to Hutchinson's estimator, but weighted a little strangely.
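    The trace identity behind Hutchinson's estimator is easy to verify numerically. In this sketch the matrix A stands in for the Jacobian ∂g/∂x (in a real model you would never form it explicitly, only Jacobian-vector products via autodiff); the dimension and sample count are arbitrary:

```python
import numpy as np

# Hutchinson's estimator: Tr(A) = E_{eps ~ N(0, I)}[eps^T A eps].
rng = np.random.default_rng(0)
d = 50
A = rng.standard_normal((d, d))  # stand-in for the Jacobian dg/dx

n_samples = 100_000
eps = rng.standard_normal((n_samples, d))
# eps^T A eps for each probe vector, then average over samples
quad_forms = np.einsum("ni,ij,nj->n", eps, A, eps)

print(quad_forms.mean(), np.trace(A))  # the two should agree closely
```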

    benanne 13 hours

    I briefly covered that connection in an earlier blog post: https://sander.ai/2023/07/20/perspectives.html#flow ... but it's definitely something that might deserve a longer-form treatment at some point :)

  • darshanmakwana 15 hours

    This is way outside of my expertise. Can anyone give a TL;DR or ai;dr?

    anvuong 15 hours

    This provides a high-level overview of diffusion models, you know, the models behind Stable Diffusion, Gemini's Nano Banana, etc.

    I haven't read it carefully, but I think it's pretty comprehensive: from the SDE to the flow matching formulation, and different perspectives on constructing the flow maps, i.e. the x-formulation or the v-formulation. It also covers distillation and consistency, which are used for fast sampling.

    Overall, it's a good read if you are new to the field.

    mxwsn 15 hours

    Diffusion and flow matching models generate samples by iterative denoising. Iterative denoising means passing an input to the neural network, running a forward pass, then feeding the output back in as the input and rerunning the network. Often you do this around 100 times, which is slow and expensive.

    Flow maps / consistency models / shortcut models instead try to learn to compress this iterative work into one forward pass. This makes inference up to 100x faster, as you'd only need to run the neural net's forward pass once. Beyond speeding up inference, there are other benefits, such as an improved ability to perform inference-time steering.

    Mathematically, learning a flow map corresponds to learning to solve an ordinary differential equation, i.e., learning the time integral of the velocity field. This mathematical foundation provides the basis for various training objectives for learning flow maps, which involve self-referential identities or identities such as the transport equation, which are discussed in the blog post.
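    To make the contrast concrete, here is a toy 1-D sketch. Everything in it is hypothetical: the velocity field just transports samples along straight lines toward the point 3.0, standing in for a learned network:

```python
import numpy as np

def velocity(x, t):
    # straight-line path toward the "data" point 3.0, mimicking the
    # conditional velocity used in flow matching: v(x, t) = (x1 - x) / (1 - t)
    return (3.0 - x) / (1.0 - t)

def sample_iterative(x0, n_steps=100):
    # iterative denoising: integrate dx/dt = v(x, t) with Euler steps;
    # in a real model each step is a full network forward pass
    x, dt = x0, 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * velocity(x, k * dt)
    return x

def sample_one_step(x0):
    # a flow map amortizes the whole trajectory into a single call,
    # jumping directly from "noise" at t=0 to "data" at t=1; for this
    # linear toy the jump happens to be available in closed form
    return x0 + velocity(x0, 0.0)

x0 = np.random.randn()       # a "noise" sample
print(sample_iterative(x0))  # 100 evaluations
print(sample_one_step(x0))   # 1 evaluation, same endpoint
```

    Both routes arrive at the same endpoint; the point of flow maps and consistency training is that a network can learn this one-step jump even when no closed form exists.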

    Hope that helps! I'm an ML researcher currently researching flow maps.

    darshanmakwana 5 hours

    Thanks! This was very helpful.

    richard___ 14 hours

    Why is self-distillation necessary? Why can't they get the ground-truth for "skipping" steps?

    cshimmin 14 hours

    Very helpful! Naïve question (I haven't had a chance to read TFA at all, and diffusion/flow models are not my area of expertise): doesn't learning the integral/solution of the diffusion process in a single pass just take us back to something like the OG generative CNNs we had before diffusion models took over? Surely the answer is "no", but I would love to hear your framing as to why.

    benanne 13 hours

    It kind of does! In the modern era of generative modelling, it seems like we rely on pre-training to capture the data distribution, and then on post-training (and various other tricks) to carve out a sliver of that distribution that we actually care about (i.e. what we want our model to generate).

    To be able to specify that subset with relatively few examples, a good high-level understanding of the data distribution is necessary. The way I see it, training a diffusion model gets you to that point; then, once you've selected the part of the distribution you actually care about, you can distill it down quite aggressively, because you no longer need all of that computation to model a much simpler distribution (sometimes all the way to one step, but usually it's a few steps in practice).

    refulgentis 15 hours

    Why not put it into an AI yourself? :) I'd rather we avoided setting a precedent of asking for this and having N people reply with their own favorite AI's version; the comments section would end up a ghost town.

    Extreme TL;DR: Diffusion models are like getting f(x) by calculating and summing f'(0), f'(1)...f'(x). Flow models are like just calculating f(x).

    tekacs 13 hours

    We've all seen that AI can give you plausible but incorrect answers. Having an expert read it, or run it through an AI and then interpret and validate the output before posting, would be most welcome IMO.

    refulgentis 4 hours

    Did you see him ask for the AI;DR?

    _doctor_love 15 hours

    HN is a place where it's legitimate to ask those kinds of questions. The site has a high concentration of advanced practitioners -- in my experience it is not uncommon for the creator of a technology or deep expert to reply. John Carmack has an account on the site for instance. :)


    charcircuit 14 hours

    Why take a gamble hoping that one of those experts takes the time to reply to you when you can instantly get an answer by asking AI?

    pipe2devnull 13 hours

    Maybe someone wants to hear from an actual human rather than risk hearing a plausible but potentially incorrect answer from AI