Overview
About This Project
This documentation provides an overview of the development and usage of cl-waffe, a framework built on Common Lisp and mgl-mat. The primary goals of this project are:
- Provide a flexible and efficient platform in 99% pure Common Lisp.
- Make the APIs as extensible as possible, so that users are not tied to the standard implementations.
- Be easy to optimize via inlined functions.
This framework is designed to be user-friendly first, enabling both beginners and experts in the field of AI to take advantage of the capabilities of a powerful programming language, Common Lisp.
⚠️ The documentation is being rewritten and is currently only half complete.
This framework is still under development and experimental. If you are thinking of using it in your products, it would be wiser to use other libraries. Admittedly, the author of cl-waffe is not an expert in AI. (Also, not having a CUDA GPU, I cannot test this framework on CUDA.)
Links
Tutorial Notebooks (Written in Japanese)
Workloads
- Make a fully optimized implementation of the standard nodes.
- Save and restore models while keeping compatibility with npz.
- 🎉 Release cl-waffe v0.1
- Add more standard implementations of NNs after the foundations are in place.
LLA Backend
cl-waffe's matrix operations are performed via mgl-mat, which in turn uses LLA. Accordingly, cl-waffe's performance hinges on the performance of mgl-mat and LLA. The recommended BLAS backend is OpenBLAS. Append the following to your setup file (e.g., ~/.roswell/init.lisp or ~/.sbclrc). For more details, visit the official repositories: LLA, mgl-mat.
(defvar *lla-configuration* '(:libraries ("/usr/local/opt/openblas/lib/libblas.dylib")))
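As far as I know, LLA reads this configuration variable from the `cl-user` package when the library itself is loaded, so the `defvar` must appear in your init file before anything loads LLA. A minimal sketch (the OpenBLAS path is an example for macOS/Homebrew and will differ per system):

```lisp
;; In ~/.sbclrc or ~/.roswell/init.lisp, BEFORE any (ql:quickload ...)
;; that pulls in LLA or mgl-mat:
(defvar cl-user::*lla-configuration*
  ;; Example path; point this at your system's OpenBLAS shared library,
  ;; e.g. /usr/lib/x86_64-linux-gnu/libopenblas.so on Debian/Ubuntu.
  '(:libraries ("/usr/local/opt/openblas/lib/libblas.dylib")))
```

If LLA is loaded first, the configuration is ignored and LLA falls back to its default BLAS/LAPACK lookup.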
When the Memory Heap Is Exhausted
An additional dynamic-space-size setting may be required, since training deep learning models consumes a lot of memory. For example, for Roswell and SLIME respectively:
$ ros config set dynamic-space-size 4gb
(setq slime-lisp-implementations '(("sbcl" ("sbcl" "--dynamic-space-size" "4096"))))
should work. However, improving memory usage is one of my ongoing concerns.
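If you launch SBCL directly rather than through Roswell or SLIME, the same runtime option can be passed on the command line; the value is in megabytes by default (4096 here, matching the examples above):

```shell
# Start SBCL with a 4 GB dynamic space
sbcl --dynamic-space-size 4096
```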