Luce Megakernel: Why is nobody talking about this?
Mirrored from r/LocalLLaMA for archival readability. Support the source by reading on the original site.
Everyone has been talking about Luce DFlash and PFlash. I just came across their megakernel, which it seems was released alongside DFlash and PFlash. Apparently it's giving them 1.8x greater speed with much better power efficiency on NVIDIA GPUs, comparable to the efficiency achieved on Apple silicon! How is it that nobody is talking about this?

They say they developed a method of avoiding CPU dispatches at layer boundaries. In llama.cpp, the CUDA implementation does roughly 100 kernel launches per token. The amount of power being used is crazy, especially as people run powerful multi-GPU setups. Isn't this really huge? Am I missing something?

Doesn't llama.cpp have a fused delta kernel? Is this similar to it? I remember reading about it, but I don't know what its status is now.
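For anyone unfamiliar with the idea: the usual pattern launches one or more kernels per transformer layer, and each launch is a CPU-side dispatch. A "megakernel" launches once and loops over layers on-device. This is only a rough sketch of the general technique, not Luce's actual code; `LayerWeights`, `attention_block`, and `mlp_block` are hypothetical placeholders.

```
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

// Hypothetical placeholders -- real kernels would take the model's
// actual weight layout and activation buffers.
struct LayerWeights { /* per-layer weight pointers */ };
__device__ void attention_block(const LayerWeights &w, float *act);
__device__ void mlp_block(const LayerWeights &w, float *act);

// One launch per token: the layer loop runs on the GPU, so the CPU
// never dispatches again until the whole forward pass is done.
__global__ void megakernel(const LayerWeights *w, float *act, int n_layers) {
    cg::grid_group grid = cg::this_grid();
    for (int l = 0; l < n_layers; ++l) {
        attention_block(w[l], act);
        grid.sync();   // replaces the implicit sync at a kernel boundary
        mlp_block(w[l], act);
        grid.sync();
    }
}
```

Because of the `grid.sync()` calls, a kernel like this has to be started with `cudaLaunchCooperativeKernel` (all blocks resident at once) rather than a plain `<<<...>>>` launch. The payoff is that ~100 per-token dispatches collapse into one, which is where the latency and CPU-power savings come from; the cost is that the whole model has to fit one grid configuration.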