I use a lot of open-source APIs, and in my experience some have documentation that any LLM will have no problem beating in almost any use case. Not only that, but many of the older APIs are painfully slow. One in particular took a day to do what a newer open-source model did in seconds. However, the newer model had horrible documentation. Moreover, aside from diving into its code on git, which I didn’t do, the newer model gave no indication of how it developed its output. It wasn’t even named after its algorithm. It wasn’t a language model; it was a Facebook time-series API.
Is this experience a sign of open-source’s future?