Mojo for compiling ML models #416
waveworks-ai started this conversation in Ideas
I’m curious about thoughts on how to use Mojo to define and compile machine learning models. What I mean here is not using the Modular inference engine, nor using Mojo to interface with libraries like TensorFlow or PyTorch, but defining the actual model data flow and computations in Mojo itself and having them compiled to efficient binary code.
For example:
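A minimal sketch of what such a model definition might look like, assuming hypothetical `conv1d`, `lstm`, and `dense` building blocks and a `Tensor` type (none of these are existing Mojo APIs):

```mojo
# Hypothetical sketch: Tensor, conv1d, lstm and dense are placeholder
# names for Mojo implementations, not part of any existing library.
fn model(x: Tensor) -> Tensor:
    # A small sequential data flow: convolution -> recurrence -> projection.
    var h = conv1d(x, filters=32, kernel_size=3)
    h = lstm(h, units=64)
    return dense(h, units=10)
```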
And then there would be efficient implementations of conv1d(), lstm(), and dense(), also written in Mojo. I’m particularly interested in how one would deal with model parameters (or maybe I should say trainable variables or weights, since “parameters” already has another meaning in Mojo - a bit unfortunate if you ask me). Ideally there could be a syntax to annotate trainable variables, for example:
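One way such an annotation could look, sketched in Mojo-like syntax (the `@trainable` decorator, `Tensor` type, and initializers here are hypothetical, shown only to illustrate the idea):

```mojo
# Hypothetical syntax sketch: @trainable marks weights so the compiler
# could collect them automatically for optimizer updates and serialization.
fn dense(x: Tensor, units: Int) -> Tensor:
    @trainable
    var w = Tensor(x.shape[-1], units)  # weight matrix

    @trainable
    var b = Tensor(units)  # bias vector

    return x @ w + b
```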
I think such a syntax would be useful for two purposes:
For context, I designed (and wrote a compiler for) a small language called FL with exactly this feature (https://github.com/waveworks-ai/FL). This workflow has proven very efficient at my company, and now I’m hoping we will be able to do the same with Mojo in one way or another.
Thanks in advance for any thoughts or insights!