Stephan Froede
1 min read · Jan 18, 2025


I made a similar observation regarding costs: a straightforward LangGraph tutorial (July 2024) cost about 8 USD.

However, cheaper models like Gemini Flash/Pro or those from Mistral.ai can reduce costs. It is questionable whether you need a massive model for this use case; a fast mid-sized model like Mistral Nemo 12B is good enough.
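For illustration, a minimal sketch of what that swap can look like with LangChain's model integrations; the package names (langchain-google-genai, langchain-mistralai) and the exact model identifiers are assumptions on my part and may change over time:

```python
# Minimal sketch: swapping an expensive default model in a LangChain/LangGraph
# pipeline for a cheaper one. Assumes the langchain-google-genai and
# langchain-mistralai packages are installed and API keys are set via env vars.
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_mistralai import ChatMistralAI

# Fast, cheap mid-sized options; model IDs are illustrative and may change.
gemini_flash = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0)
mistral_nemo = ChatMistralAI(model="open-mistral-nemo", temperature=0)

# Any node in the graph that previously used a large frontier model can take
# one of these instead; the invocation interface stays the same.
response = mistral_nemo.invoke("Summarize today's task list in three bullet points.")
print(response.content)
```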

Of course, that does not solve the AI agent problem. In my opinion, having the AI agent (an LLM) systematically decompose a task into sub-tasks does not make it an agent; it is just a glorified prompt chain.
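A minimal sketch of the pattern I mean, where every decomposition and execution step is hard-coded by the developer and the model only fills in prompts, which is why I would call it a chain rather than an agent (the `llm` object is assumed to be any chat model with an `invoke` method, such as the one in the snippet above):

```python
# "Plan, then execute sub-tasks" written out as a fixed prompt chain.
def run_glorified_prompt_chain(llm, task: str) -> str:
    # Step 1: ask the model to decompose the task into numbered sub-tasks.
    plan = llm.invoke(
        f"Decompose the following task into 3-5 numbered sub-tasks:\n{task}"
    ).content

    # Step 2: execute each sub-task with another prompt, threading results along.
    results = []
    for line in plan.splitlines():
        if not line.strip() or not line.strip()[0].isdigit():
            continue  # skip anything that is not a numbered sub-task
        answer = llm.invoke(
            f"Overall task: {task}\nPrevious results: {results}\nNow do: {line}"
        ).content
        results.append(answer)

    # Step 3: merge. Every decision point above is fixed by the developer;
    # the model never chooses its own actions, it only answers prompts.
    return llm.invoke(
        f"Combine these partial results into one answer:\n{results}"
    ).content
```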

Heuristic use cases, like discovering a trading strategy, might not be well suited to an AI agent approach. One issue, for example, is that the LLM permanently resides in its box, re-combining existing knowledge, including financials or news from the past. You can partially address this with an LLM that has a large context window, like Mistral or Gemini, because current financials and news can then be supplied directly in the prompt.

But the LLM will still fail to discover a good strategy, because the problem has no axiomatic truth. Only the market can prove a trading strategy's correctness (its truth level); the LLM cannot do that on its own.
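Roughly, the verification has to happen outside the model: the LLM can at best propose parameters, while a backtest over price data (a crude stand-in for the real market) assigns the score. Everything below, the synthetic prices and the moving-average rule, is purely illustrative, not a real strategy:

```python
# Sketch of the missing verification loop: the model proposes a rule,
# the data (not the model) decides whether it works.
import numpy as np

def backtest_sma_crossover(prices: np.ndarray, fast: int = 5, slow: int = 20) -> float:
    """Total return of a simple moving-average crossover rule."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    n = min(len(fast_ma), len(slow_ma))
    # Long when the fast average is above the slow one, flat otherwise.
    position = (fast_ma[-n:] > slow_ma[-n:]).astype(float)[:-1]
    daily_returns = np.diff(prices[-n:]) / prices[-n:-1]
    return float(np.prod(1 + position * daily_returns) - 1)

# An LLM could have suggested fast=5, slow=20; the score comes from the data.
prices = np.cumprod(1 + np.random.default_rng(0).normal(0.0005, 0.01, 500)) * 100
print(f"Total return of proposed rule: {backtest_sma_crossover(prices):.2%}")
```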
